And that it takes queries in natural language
Well, I wouldn't say that it "takes queries".
It just completes the sentences you begin with in the most natural way.
So if you give it a query, it gives you an answer because an answer naturally follows a query, not because ChatGPT understands that you have given it a query.
It's quite a bit more sophisticated than that. You can ask for not only specific information but also a genre/style of output, or specific technical specs for the output, and it responds masterfully: everything from "in the style of the Bible" to "with a humorous tone" to "in language a 5 year old can understand" to specific specifications for CSV-compliant data or HTML encoding.
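To make that concrete, here is a hypothetical prompt of that kind (the wording is my own illustration, not from this thread). The useful part of specifying a format up front is that the reply becomes machine-checkable:

```python
import csv
import io

# Hypothetical prompt pairing a content request with a strict format spec.
prompt = (
    "List the eight planets and their diameters in km as CSV-compliant "
    "data with the header row: name,diameter_km"
)

# Because the format was specified, a reply can be validated mechanically.
# 'reply' is a placeholder for whatever text the model returns.
def looks_like_requested_csv(reply: str) -> bool:
    rows = list(csv.reader(io.StringIO(reply)))
    return bool(rows) and rows[0] == ["name", "diameter_km"]
```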
That's extraordinary. Nothing else can match that breadth of capability.
Yes, and after all, it is just completing your sentence, nothing more!
It is not capable of giving you specific information that is more than just a logical continuation of your input.
And yes, you could have a "conversation" with it, but that conversation has the same quality of information as if you were talking to someone from another country without knowing their language, with both of you speaking your own language as best you can.
There would be no information transfer within such a conversation, and there is no information transfer when ChatGPT is "answering" you, because ChatGPT is simply not aware that it is, or should be, giving you specific information.
I don't know if it is ready to replace Wikipedia or Google. Then again, I don't think the current Wikipedia or Google is ready to replace Wikipedia or Google, if you get my sarcastic drift. I would say this was the wrong question in some way.
Google is so manipulated now as to be unreliable, unless you already know enough not to need it, as it were. It will take you back to stuff you already know, I guess, in a sense, and I do still use it a lot that way.
It's quite a bit more capable than that.
There are topics on which I will make an assertion and it debates me to show I am wrong.
It has particular strength in writing short scripts in various computer languages - for sure it is doing more than completing my sentence, because it is creating code that I did not know how to create previously. Sometimes the code isn't perfect, but it's way easier for me to fix that than to write it from scratch.
It is not without fault. Indeed, I think its biggest weakness is that it does not know the boundaries of its knowledge and thus fabricates stuff. And its inability to output an accurate reference for the things it says is a big weakness as well.
Indeed, those weaknesses prevent me from putting it to actual production use for now. But the same is true of most beta software. I think their demo shows they have made a huge breakthrough in the hardest part of the project. It's way beyond the elementary chatbots of prior efforts.
But even if it is doing so, it is not the result of an intentional debate, but pure luck that it picked those words and no others!
Sure, because the possibilities for how code can be used are not endless. There is a much higher chance of "creating" a working piece of code than there is within other languages.
But that too is just a random result, intended neither by the software nor by the developers of ChatGPT.
I do recognize that early AI systems were basically pattern generation with a fairly limited template of potential responses.
But if you experiment with ChatGPT for a while, it is evident that it can combine concepts in novel ways. Surely nobody trained ChatGPT to produce biblical verse about a peanut butter sandwich stuck in a VCR, but it pulls it off masterfully.
While I understand the argument that it is "just pattern recognition," I could also argue that the human brain is "just binary sodium and calcium channels." Arguably the biggest scientific question of all time is how those ion channels make the leap to consciousness and original thought, and the answer is that nobody knows.
I do not suggest for a moment that ChatGPT is sentient. It may well be true that it is "just pattern recognition" or that it has a finite number of responses from which to choose. Even if so, it clearly has enough patterns to choose from to make it extremely useful.
While there are still problems the ChatGPT engineers need to solve before it can be considered production-ready, I believe the tasks that remain are considerably less complex than the tasks their team has already accomplished. I have never seen another AI system that I felt had advanced that far. This is why I believe ChatGPT will ultimately be seen as a game-changer on the order of the printing press.
Julia Angwin at The Markup, one of the most incisive journalists writing on algorithms and society these days, has a good interview with Princeton's Arvind Narayanan this weekend about AI and language models like ChatGPT:
A human bullshitter doesn't care if what they're saying is true or not; they have certain ends in mind. As long as they persuade, those ends are met. Effectively, that is what ChatGPT is doing. It is trying to be persuasive, and it has no way to know for sure whether the statements it makes are true or not.
But there's more, including a glimpse of Narayanan's AI taxonomy, which is a very helpful tool. Edit: And he's not all down on AI; he sees some uses and counters some of the doomsday predictions. It's a good read.
Code is an interesting case. When I've asked it to write code in Python (which I know reasonably well), it does a good job.
This makes sense. There are a ton of examples of Python code on the web. While you can do things in multiple ways in Python, it has much less flexibility than English, for example; one thing (statement, command, character) tends to follow from another much more reliably than in English. Outside a comment or string literal, after "print" comes either "(" (Python 3) or a space (Python 2); other examples are probably statistically insignificant at the scale of its training data. So a language model, which essentially determines what to put next by what it's seen, works quite well.
I've only tried simple scripts so far. But the fact is that programming explicitly relies on repeatable and repeated syntactical and structural patterns. Those patterns are far, far less flexible than human language.
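As a toy illustration of that statistical point (not a claim about ChatGPT's actual internals, which use neural networks rather than raw frequency tables), here is a minimal sketch that counts next-token statistics over a tiny made-up corpus of Python fragments:

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" of Python fragments (illustrative only).
corpus = [
    "print ( 'hello' )",
    "print ( total )",
    "for x in items : print ( x )",
    "print ( x , y )",
]

# Count which token follows each token across the corpus.
follow = defaultdict(Counter)
for snippet in corpus:
    tokens = snippet.split()
    for current, nxt in zip(tokens, tokens[1:]):
        follow[current][nxt] += 1

# In this corpus, "(" is the only token ever seen after "print",
# so a frequency-based predictor will always choose it.
print(follow["print"].most_common())  # -> [('(', 4)]
```

A real model works over learned representations and far more context, but the underlying intuition - predict what most plausibly comes next - is the same.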
But ChatGPT is also impressively capable at interpreting natural language requests, which, as you say, have far less predictability than code.
Your Narayanan link is a good one, and related to this excellent summary as well:
Very interesting - it turns out that since ChatGPT is open source with an API, there are infinite ways to tweak it, including adding the ability for it to reference accurate links on the web for PubMed or anything else.
FWIW, ChatGPT isn't open source.
Fair enough - but it has an API and supports plugins, so it is quite extensible/customizable.
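As a sketch of what that customization looks like at the simplest level, here is one call to the hosted API over plain HTTP. The model name is an assumption, and anything like a PubMed lookup would be application code layered on top, since the base model does not fetch links by itself:

```python
import os

import requests

# Minimal sketch of one call to the hosted chat completions endpoint.
# Assumptions: the OPENAI_API_KEY environment variable is set, and the
# model name below is available to your account (it is illustrative).
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [
            {"role": "user", "content": "Summarize CRISPR in two sentences."},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Everything beyond this single call - retrieving real PubMed records, validating citations - is extra code wrapped around the model, which is exactly where the extensibility comes in.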