May I share this with my science department heads?
I don’t think that something that is nowadays called “AI” should be used to do scientific research!
I will give a +1 to Consensus
I found it on ProductHunt recently and immediately saw its potential. I contacted the developers with some suggestions and they have been quite receptive.
My use so far has only been for searching medical topics. It returns a distinctly different set of responses than searching PubMed or Google Scholar. Its AI has an uncanny ability to find articles where the abstract answers a specific question.
It is still a work in progress. For example, if you search for “Does Treatment X Help Y”, you may get a hit that highlights the phrase “Treatment X Helps Y”; then, when you go to the actual citation, you realize that it is a rhetorical question posed in the paper, and the conclusion actually says that the treatment does NOT help Y. But that’s still fine, because the whole point for me is to find articles which prove or disprove a given hypothesis.
I have suggested to them a number of feature additions including Zotero translator integration and export of results to Google Sheets. I suspect in time these will be added.
I would imagine Consensus would quickly become a regular go-to site for anyone who uses academic literature in whatever field.
Think of it as similar to Devonthink but for searching peer-reviewed literature.
It doesn’t do the research; rather it helps you find pertinent research papers.
Thanks for posting this. I spent this morning researching a medical condition for a friend. Just now, I searched using Consensus for the same condition and feel that this could have saved me considerable time. I also found some abstracts that I didn’t find this morning. Will continue my testing of Consensus and provide feedback through the site.
I do, and Devonthink’s so-called “AI” works for me in less than 5% of all cases!
Are you saying people shouldn’t use AI for research because AI doesn’t work?
This is fun. Is there a way to exclude or downweight keywords or concepts? For example, “does drinking water reduce blood sugar” (just a test example) mostly gives results about alcohol and I’d rather not see those.
Then I assume you don’t use Google search? Well, there are still libraries if you want to go through the stacks.
Yes, that pretty much sums it up.
The big problem is that there is no artificial “intelligence” out there. These “AIs” are all just (complex) pieces of software written by one or more human beings. And every human being (fortunately!!) makes mistakes.
If you start to rely on a system like that for your scientific output, there is a high risk that a failure made during the programming of the “AI” will result in mistakes and failures in the “scientific” output.
If one scientist or a working group makes a mistake, that is a normal part of the scientific process, and in many cases it would be caught during the publication process.
But if there is “AI”, the chances are very high that large groups of scientists would be working with it, or would let the “AI” do their jobs, or (big) parts of them.
This would multiply every wrong line of code many times over and could result in unpredictable damage and risks to all of us.
Google is a simple search engine, not an “AI”!
The thing is, the AI isn’t making conclusions here - it’s just returning research papers. It’s basically doing the job of a research assistant, which has historically been done by students.
The actual scientists are the ones reading the resulting articles, crafting experiments to confirm hypotheses, and making conclusions.
If the AIs start writing the papers then I’d get concerned.
I was under the impression that Google, at a minimum, uses machine learning in its algorithms - which is a subset of AI, isn’t it?
Following this logic, we should not be using software at all, for any purpose. I admit that is a defensible position but it hardly seems at home on a forum such as this.
AI is a field in computer science and is also used as a (marketing) term. It encompasses machine learning, deep learning, neural networks, voice search, computer vision, image recognition, natural language generation (NLG), and natural language processing (NLP). Google and many other search engines probably use several of these. Using AI for research is certainly very important; I suspect many breakthroughs would not even have been possible without AI.
This is a nice read:
There is a nice set of podcasts made by the academic mathematician and media person Hannah Fry.
They certainly opened my eyes to the academic and research potential of AI and demystify a lot of the vague and often alarmist use of the term. Many of the ethical considerations that @Ulli references here are broadly discussed by the researchers at DeepMind. Worth a listen.
I wonder how Consensus works with humanities and social science literature. In my experience the cutting-edge stuff tends to entirely ignore the ‘softer’ sciences, which is a shame. Happy to try this out, thanks for the heads-up.
A simple search engine?
Have you considered why Bing and Google give different results? I’m not sure you have a strong enough grip on what AI is to be throwing stones at this…
I’m grateful to learn about this tool.
So far, I find that it doesn’t seem to incorporate the type of intelligence that I would like. For example, there’s no weighting of studies that incorporate randomized design over those that don’t. Usually, I’m not looking for a consensus in the popular literature. I’m looking for the best evidence.
When I search for “Does intermittent fasting help with weight loss?”, the information that Consensus returns indicates that this diet actually works, or at least is very promising. It doesn’t home in on those few studies that used randomized designs to evaluate the treatment. As far as I know, those studies have found no benefit for this diet over other diets for losing weight, and it is those studies I would want to know about immediately.
I think there are two potential levels of AI in literature search:
(1) Find articles which address a stated question
(2) Search within the found articles to determine which are the most credible
It seems to me that Level (1) is what Consensus does - and in my use so far, it is more efficient/effective than PubMed or Google Scholar in doing so.
While (2) would be nice, I do not believe Consensus (or any other search engine) makes any attempt to do that. Moreover, I need to think a bit about whether I would even want software to do that. If I am using information professionally (in academics, in medicine, as a legal consultant, or in any other professional capacity), I probably want to be aware of the entirety of the literature base, good or bad, on the subject at hand.
For a consumer-oriented search engine it may well be desirable for AI to pick the most credible article. But I am not sure that is desirable behavior for professionally-oriented or academically-oriented searches.
Question (not just for @rkaplan ): Is there a reason a tool such as this can’t do both? If it can find the most credible results, then it would seem it has some metric to do so. Could that metric be presented along with each result returned?
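Purely as a thought experiment (this does not reflect how Consensus actually works, and the field names and weights below are invented for illustration), such a metric could be as simple as a weighted combination of study design and sample size, shown next to each result:

```python
# Hypothetical sketch: rank search results by a crude "credibility" score.
# The design weights and the 0.7/0.3 split are arbitrary choices for illustration.

DESIGN_WEIGHTS = {
    "randomized controlled trial": 1.0,
    "cohort study": 0.6,
    "case report": 0.2,
}

def credibility(article):
    """Combine study design and sample size into a rough 0-1 score."""
    design = DESIGN_WEIGHTS.get(article.get("design", ""), 0.1)
    # Diminishing returns on sample size: n=100 gives ~0.5, n=10000 gives ~0.99
    n = article.get("sample_size", 0)
    size = n / (n + 100)
    return 0.7 * design + 0.3 * size

results = [
    {"title": "RCT of diet X", "design": "randomized controlled trial", "sample_size": 250},
    {"title": "Single case report", "design": "case report", "sample_size": 1},
]
for a in sorted(results, key=credibility, reverse=True):
    print(f"{credibility(a):.2f}  {a['title']}")
```

Of course, boiling credibility down to one number is exactly the kind of editorializing the previous posts worry about - which is why showing the score alongside every result, rather than silently filtering by it, seems like the safer design.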