An open-source alternative built on the same kind of large language model technology that powers ChatGPT is now available.

From Together:

OpenChatKit provides a powerful, open-source base to create both specialized and general purpose chatbots for various applications.

It’s thus entirely customisable by anyone who knows what they’re doing. Which isn’t a tiny barrier - it’s not like some app you can just download to your computer and double-click on. But it opens development and contribution up to a wide range of people who otherwise might not have access to this technology, and brings innate transparency as to how the model works and what data it’s using.
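To give a flavour of what “knowing what you’re doing” involves here: the base model is published on Hugging Face as togethercomputer/GPT-NeoXT-Chat-Base-20B, so in principle a few lines of Python will let you talk to it locally. A rough sketch, assuming you have the (considerable) GPU memory a 20-billion-parameter model demands, and using the <human>/<bot> prompt format from the model card:

```python
# Minimal sketch of chatting with OpenChatKit's base model via Hugging Face
# transformers. Requires substantial GPU memory (or a lot of patience);
# device_map="auto" needs the accelerate package installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "togethercomputer/GPT-NeoXT-Chat-Base-20B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto"
)

# OpenChatKit's chat format: turns are prefixed with <human> and <bot>.
prompt = "<human>: What is a large language model?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```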

Perhaps one day OpenChatKit will be to ChatGPT as Stable Diffusion is to Midjourney. In the meantime, you can play with it via the web and give feedback to the developers here.

By default it’s clearly not as comprehensive as ChatGPT, but, who knows, one day it might get there if it takes off enough that people with the relevant data or expertise contribute.

It also has a fine-tuning mechanism that lets you create chatbots for specific applications. Examples they’ve worked on include a chatbot to help students learn from the contents of textbooks and one trained on financial data that can answer questions about finance.
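The actual fine-tuning recipes live in the OpenChatKit repository itself. Purely as an illustration of the general idea - not their scripts - here’s a minimal sketch using the Hugging Face Trainer, where the dataset file (finance_qa.txt) and the hyperparameters are hypothetical placeholders:

```python
# A generic illustration of domain fine-tuning a chat model, NOT
# OpenChatKit's own training scripts. The dataset path and all
# hyperparameters below are hypothetical placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "togethercomputer/GPT-NeoXT-Chat-Base-20B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical file of domain text, e.g. question/answer pairs drawn from
# financial filings, one training example per line.
dataset = load_dataset("text", data_files={"train": "finance_qa.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-finance-bot",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    # Causal language modelling: labels are the input tokens shifted by one.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice a 20-billion-parameter model won’t fine-tune on a single consumer GPU without parameter-efficient methods or heavy offloading - which is exactly the expertise barrier mentioned above.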

This happened in roughly the same week that the creators of the actual ChatGPT, OpenAI, moved ever further in the direction of ignoring their company’s name and what appeared to be their original mission, taking a much more closed and secretive approach when it comes to sharing the details of their GPT-4 software.

One of the founders, chief scientist Ilya Sutskever, gives the reason for this decision as being something to the effect of AI being just too powerful and scary for the hoi polloi to have access to.

The Verge quotes Sutskever:

Flat out, we were wrong. If you believe, as we do, that at some point, AI - AGI - is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea… I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.

It’s a controversial opinion to some. And the idea that this was the sole motivator may well be confounded by other changes in the direction of their organisation. Their GPT-4 technical report also mentions “the competitive landscape” as one reason why they are not sharing many details.

In any case, whilst it’s easy to see the intuition behind that kind of argument (if you are happy to accept OpenAI and their rivals as being 100% trustworthy caretakers of AI use and policy for the good of all in society), I can’t really imagine that “not telling us how your powerful technology works” is the best method of saving humanity from itself in the long term.

Or even, possibly, all that effective on its own terms. After all, the model weights for the Facebook equivalent, LLaMA, leaked within a couple of weeks of its release.