A monthly recap of some news, announcements, beta leaks, and other interesting Generative AI action that caught our attention. Join our newsletter and receive this recap straight to your Inbox.
Proudly curated by Dr. Daniel Moskovich.
This month’s highlights:
- NVIDIA became the third company in history to be valued at over $2 trillion. Founder & CEO Jensen Huang has made several outstanding choices, including using the “dead” language Ada (and its restricted subset SPARK) for NVIDIA firmware. Ada is considered tedious to program in, but it is secure and type-safe, making it a great fit for security- and safety-critical use cases.
- Aside from Project Titan, it seems that (almost) everyone is talking about Google Gemini, primarily focusing on its political alignment and its omission of white men from image generation. This, and even more so Google’s reaction, reinforces my opinion that Google is thinking in terms of advertisers and investors rather than consumers, which likely means losing the ‘LLM war’ to OpenAI, Mistral, and other companies. Still, Google is a powerhouse, so never say never.
- Microsoft quickly came out with LongRoPE, an open-source method for extending context windows to over 2 million tokens. It seems context length is quickly becoming less of a bottleneck for LLM adopters.
- Handling long contexts is quite the race now, and UC Berkeley released the Large World Model (LWM) with its 1M-token context, using a new technique called “RingAttention” (there’s a toy sketch of the core idea after this list).
- OpenAI came out with Sora (Japanese for “sky”), which generates video from text prompts (and can also animate still images). This is again a mixed bag, because the model clearly doesn’t understand physics and can produce complete garbage, but it’s technically fantastic and surely a time-saver for quick video creation! Particularly fascinating to me is that it doesn’t generate each frame as a “next token”; rather, as a diffusion model, it iteratively denoises the entire video at once.
- Despite their recent leaks, Mistral’s latest and greatest, Mistral Large, is closed-source. It’s competitive with GPT-4, if not quite as good, and maybe it’s better in some European languages.
- Lightricks (full disclosure: I worked there years ago) released LTX Studio, which generates video from text on an iPhone and integrates with other Lightricks tools. I continue to be amazed by how quickly Lightricks can integrate new tools and techniques – the wonders of great code design!
- Alibaba released Qwen1.5. This places the Chinese labs solidly in the race for the top LLM: it is competitive with GPT-4 while offering much stronger multilingual capabilities, functioning seamlessly across many languages, which sets it apart IMO.
- Consequent AI published a study to establish which LLMs can actually reason, as opposed to just recalling results from their training data. Spoiler: GPT-4 wins.
- NVIDIA released Chat with RTX, a demo app that runs a local LLM on a Windows machine with a good NVIDIA GPU and lets you query your own file system. This is an attractive concept… you can choose either Llama 2 or Mistral as your LLM backend.
- Klarna says its AI assistant, built with OpenAI, is doing the work of 700 customer service representatives. This is a manifestation of the “nightmare” of AI taking everyone’s jobs. I wonder how “robo customer service using generative AI” will play out in the long term – will the human touch gain in value, or will it vanish?
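Since long context came up twice above (LongRoPE and LWM’s RingAttention), here is a minimal, single-host NumPy sketch of the blockwise “online softmax” idea at the heart of RingAttention. This is not the paper’s implementation – the real thing shards the key/value blocks across devices and rotates them around a ring while overlapping communication with compute – just a toy illustration of how attention can be computed block by block without ever materializing the full attention matrix. All function and variable names below are my own.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(q, k, v):
    """Vanilla attention: materializes the full (n_q x n_k) score matrix."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def blockwise_attention(q, k, v, block_size):
    """Process keys/values in blocks, keeping running softmax statistics,
    so the full score matrix is never materialized. In RingAttention the
    blocks live on different devices and are passed around a ring; here
    everything stays on one host to keep the idea visible."""
    d = q.shape[-1]
    n_q = q.shape[0]
    m = np.full((n_q, 1), -np.inf)        # running row-wise max of the scores
    l = np.zeros((n_q, 1))                # running softmax denominator
    acc = np.zeros((n_q, v.shape[-1]))    # running numerator (weighted sum of V)
    for start in range(0, k.shape[0], block_size):
        kb = k[start:start + block_size]
        vb = v[start:start + block_size]
        s = q @ kb.T / np.sqrt(d)                          # scores for this block only
        m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
        p = np.exp(s - m_new)                              # block's unnormalized weights
        scale = np.exp(m - m_new)                          # rescale the old statistics
        l = l * scale + p.sum(axis=-1, keepdims=True)
        acc = acc * scale + p @ vb
        m = m_new
    return acc / l

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(8, 16)) for _ in range(3))
assert np.allclose(full_attention(q, k, v), blockwise_attention(q, k, v, block_size=3))
```

The assert at the end checks that the blockwise result matches vanilla attention; in the distributed setting each loop iteration would instead receive the next key/value block from a neighboring device, which is how the context can be stretched to a million tokens and beyond.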
That’s it for now. Did you see something of interest in the world of Generative AI? Drop us a note and we’ll be happy to include it in our monthly round-up 🙂