Did Anthropic Claude 100K just beat OpenAI?
Anthropic might have beaten OpenAI at its own game by releasing a 100,000-token context window, which is mind-blowing. I don’t know of any other model that offers this, and in this video I would like to break it down so you can see why Anthropic might be closer to OpenAI than many of us assumed. Let’s get started. The new announcement from Anthropic is titled “Introducing 100K Context Windows.” Anthropic is a company that builds large language models, and its most popular model is called Claude. Claude previously had a 9K (9,000-token) context window, but as of today they have upgraded it to 100,000 tokens, which is approximately 75,000 words. This is insane, because businesses can now submit hundreds of pages of material for Claude to digest, and people can analyze it, have conversations about it, and ask whatever questions they want.
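To get a feel for those numbers, here is a minimal sketch of the token-to-word arithmetic, assuming the roughly 0.75 words-per-token ratio implied by the announcement’s figures (100,000 tokens ≈ 75,000 words); real tokenizers vary by text, so treat this as a back-of-the-envelope estimate only:

```python
# Rough token estimate based on the announcement's figures:
# 100,000 tokens ~= 75,000 words, i.e. ~0.75 words per token.
# This ratio is an assumption for English prose, not an exact tokenizer.

WORDS_PER_TOKEN = 0.75

def estimate_tokens(word_count: int) -> int:
    """Estimate how many tokens a document of `word_count` words uses."""
    return round(word_count / WORDS_PER_TOKEN)

def fits_in_context(word_count: int, context_tokens: int = 100_000) -> bool:
    """Check whether a document should fit in the 100K context window."""
    return estimate_tokens(word_count) <= context_tokens

print(estimate_tokens(75_000))    # -> 100000
print(fits_in_context(75_000))    # -> True: right at the limit
print(fits_in_context(200_000))   # -> False: ~267K tokens, too big
```

By the same ratio, the old 9,000-token window topped out at roughly 6,750 words, which is why hundreds of pages were previously out of reach.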
They’ve published some figures on what 100,000 tokens means in practice, but the key demonstration is this: they loaded the entire text of The Great Gatsby (about 72,000 tokens) into Claude, modified a single line to say “Mr. Carraway was a software engineer that works on machine learning tooling at Anthropic,” and asked the model to spot what was different. It responded in 22 seconds. In other words, it can effectively read the whole book and answer in 22 seconds, which no human can do. This is quite amazing, because you can now start using Claude as a business analyst. For example, suppose you want to invest in a company: you can ask Claude to do the analysis for you, provide a summary, or highlight important items for a potential investor. Claude can go through these documents one by one and surface highlights and details that are not easily accessible to most people unless they have a corporate lawyer on hand.
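The spot-the-difference test described above can be sketched as a prompt-construction step. The helper name below is hypothetical, and this only builds the prompt offline; it does not call the API:

```python
# Sketch of the "needle" test described above: take a long text, inject
# one modified line, and build a prompt asking the model to find it.
# `build_spot_the_difference_prompt` is a hypothetical helper, not an
# Anthropic API; sending the prompt to Claude is a separate step.

NEEDLE = ("Mr. Carraway was a software engineer that works on "
          "machine learning tooling at Anthropic.")

def build_spot_the_difference_prompt(book_text: str, needle: str) -> str:
    """Bury `needle` mid-document and ask the model to spot it."""
    paragraphs = book_text.split("\n\n")
    paragraphs.insert(len(paragraphs) // 2, needle)
    modified = "\n\n".join(paragraphs)
    return (f"{modified}\n\n"
            "What is different in this text compared to the original "
            "novel? Answer with the out-of-place line.")

# With the real Great Gatsby text (~72K tokens) in place of this stub,
# the whole prompt fits comfortably inside the 100K window.
prompt = build_spot_the_difference_prompt(
    "In my younger and more vulnerable years...\n\nChapter two begins...",
    NEEDLE,
)
print(NEEDLE in prompt)  # -> True
```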
This is quite exciting, and the things you can do with it are really amazing. For example, 100K tokens roughly translates into 6 hours of audio, which means you can transcribe 6 hours of audio into text and then ask Claude questions about it. You can do summarization and document analysis: if you want to evaluate an investment, or dig into the kind of legal details that lawyers charge a lot of money for, you can ask Claude to do it.

In fact, you can make Claude your programming assistant. Suppose you would normally hand a junior developer an API document to work through; what if that junior developer is Claude, and it does the code companionship by digesting the whole document? In their demo, Claude ingests the LangChain documentation and is then asked to create a simple LangChain demo. It can write the LangChain code just by learning from the uploaded documentation, which is simply amazing.

I think Anthropic is now serious, tough competition for OpenAI. In fact, a lot of enterprises might prefer Anthropic over OpenAI, because the model is in place and the pricing is decent as well (I’m not doing a pricing comparison here). If you want to use the 100K model via the API, it’s just a one-argument change: replace the existing model name with the new Claude 100K model name, and you can use the 100K model like that. With this, Anthropic has arguably beaten OpenAI at its own game by pushing what these large language models can do to the next level. I would like to see what OpenAI does next. So, OpenAI, it’s your turn.
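Here is a minimal sketch of that one-argument change, assuming the completion-style Anthropic Python SDK from around the time of the announcement and the `claude-v1-100k` model string it mentioned; both the client interface and the model names may have changed since, so check Anthropic’s current documentation before using this:

```python
# Sketch of the one-argument switch to the 100K model. Only the `model`
# value changes; everything else about the request stays the same.
# Model strings ("claude-v1", "claude-v1-100k") reflect the announcement
# era and may be outdated; the SDK call is shown commented out below.

def build_request(prompt_text: str, use_100k: bool = True) -> dict:
    """Assemble request kwargs; switching windows is just the model name."""
    return {
        # the single argument that changes:
        "model": "claude-v1-100k" if use_100k else "claude-v1",
        "prompt": f"\n\nHuman: {prompt_text}\n\nAssistant:",
        "max_tokens_to_sample": 1000,
    }

request = build_request("Summarize the attached annual report.")
print(request["model"])  # -> claude-v1-100k

# To actually send it (requires the `anthropic` package and an API key):
# import anthropic, os
# client = anthropic.Client(os.environ["ANTHROPIC_API_KEY"])
# response = client.completion(**request)
```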