DAI#59 – APIs, Dead Bills, and NVIDIA Open

Welcome to our weekly roundup of artificial intelligence news.
This week OpenAI gave out API gifts.
California’s AI safety bill was vetoed.
NVIDIA blew us away with its powerful open model.
Let’s dig a little deeper.
The agents are coming
OpenAI didn’t announce any new models (or Sora) at its DevDay event, but developers are excited about the new API capabilities. The new Realtime API could change the game, enabling smarter applications that talk to users and even act as agents.
The demo is really cool.
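To make the idea concrete, here is a minimal sketch of how a client might configure a Realtime voice session. The endpoint URL and the `session.update` event shape follow OpenAI’s public beta documentation at the time of writing; treat both as assumptions that may change as the API evolves.

```python
import json

# Realtime API endpoint (beta): the model is selected via a query parameter.
REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

def build_session_update(voice="alloy", instructions="You are a helpful agent."):
    """Build the session.update event that configures the voice session.

    Returns a JSON string ready to send over the WebSocket connection.
    """
    return json.dumps({
        "type": "session.update",
        "session": {
            "modalities": ["text", "audio"],
            "voice": voice,
            "instructions": instructions,
        },
    })

event = build_session_update()
```

In practice you would open the WebSocket with an `Authorization: Bearer <API key>` header plus `OpenAI-Beta: realtime=v1`, send this event first, and then stream microphone audio to the server as subsequent events.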
There had been rumors that OpenAI would take the for-profit route and grant Sam Altman billions of dollars in equity, but Altman denied them. Even so, the company is courting major new investment, and those investors will expect a return on their cash.
Apple has integrated OpenAI’s models into its devices but withdrew from OpenAI’s latest funding round, which is expected to raise approximately $6.5 billion.
We’re not sure why Apple doesn’t want a piece of the OpenAI pie, but it may have something to do with its own Apple Intelligence efforts. Or perhaps it has something to do with Sam Altman’s push for exclusivity.
Tell me you don’t have a moat without telling me you don’t have a moat. pic.twitter.com/3I18MosvOg
— Pedro Domingos (@pmddomingos) October 3, 2024
Kill Bill
Gavin Newsom had to decide whether to impose a rev limiter on AI developers or let them run at full throttle. Ultimately, he vetoed California’s SB 1047 artificial intelligence safety bill, offering some interesting reasons for doing so.
Have we really reached the point where we face real risks from artificial intelligence?
Well, things escalated quickly. pic.twitter.com/aLZn4blS8G
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) September 30, 2024
Newsom signed a slew of AI bills last month addressing deepfakes, AI watermarking, child safety, performers’ AI rights, and election misinformation. Last week he signed AB 2013, which will really change things for LLM creators.
The bill requires developers to publish, on or before January 1, 2026, a high-level summary of the training datasets used for any model released on or after January 1, 2022 and made available in California. Some of those disclosures may reveal some dirty secrets.
More EU AI regulations
The EU is clearly more concerned about the safety of AI than the rest of the world. Or they just enjoy writing and passing legislation. This week, they launched a project to write a code of practice for artificial intelligence in an attempt to balance innovation and safety.
When you see an AI safety advocate chosen to lead the working group, it’s clear which way it’s going to lean.
Liquid Foundation Models
Transformer models brought us ChatGPT, but there’s been a lot of debate lately over whether they can deliver the next leap forward in artificial intelligence. A company called Liquid AI is taking a different approach with its Liquid Foundation Models (LFMs).
These are not typical generative AI models. LFMs are specifically optimized to handle longer contexts, making them ideal for tasks that process sequential data such as text, audio, or video. They achieve impressive performance with smaller models, less memory, and less compute.
NVIDIA Open
NVIDIA just dropped a game-changer: an open AI model that goes head-to-head with big players like OpenAI and Google. Its new NVLM 1.0 series, led by the flagship 72B-parameter NVLM-D-72B, excels at vision-language tasks while also improving on text-only benchmarks.
With open weights and NVIDIA’s commitment to releasing the training code, it’s becoming increasingly difficult to justify paying for proprietary models in many use cases.
Just say you don’t know
A new study finds that state-of-the-art large language models (LLMs) are less likely to admit when they don’t know the answer to a user’s question. When users pose questions these models can’t handle, the models are more likely to fabricate an answer than to admit they don’t know.
The study highlights the need for fundamental shifts in the design and development of general artificial intelligence, especially when it is used in high-risk domains. Researchers are still trying to understand why AI models are so eager to please us rather than say, “Sorry, I don’t know the answer.”
Inside artificial intelligence
It seems like everyone is labeling their products “artificial intelligence” to attract customers. Here are some AI tools that are really worth checking out.
Bluedot: Record, transcribe, and summarize your meetings with AI-generated notes, no bots needed.
Guidde: Magically transform your workflow into a step-by-step video guide, complete with AI-generated voiceover and professional-grade visuals, all in just a few clicks.
In other news…
Here are some other click-worthy AI stories we loved this week:
That’s a wrap.
If you live in California, we’d love to know how you feel about SB 1047 being vetoed. Is it a missed opportunity for AI safety, or a positive step toward reaching AGI sooner? With powerful open models like NVIDIA’s new blockbuster in the wild, regulating LLMs is going to be a lot harder anyway.
OpenAI’s Realtime API is the highlight of the week. Even if you’re not a developer, the prospect of talking to smarter customer service bots is pretty cool. Unless, that is, you work as a customer service agent and want to keep your job.
Let us know what you think, follow us on X, and send us links to cool AI content we may have missed.