Having said that: I think there’s a product here, and some lessons to learn. Perhaps the authors will eventually apply them to SpacetimeDB v3 and launch a more resilient, LLM-friendly database: one where application code is isolated and can run for as long as it needs without risking other application code running locally, even in the face of serious implementation bugs; where transactions can run as long as they need without degrading the performance of other transactions; where transactions are implicitly throttled when they take too long because the LLM failed to provide an optimal query plan. Perhaps we’ll see a system that is much more resilient to failure, at the cost of much less “impressive performance”; perhaps the system will be trivially distributed, so the AI agent doesn’t have to design a distributed system itself; perhaps it will launch with fewer silly benchmarks and more technical details.
Both of these approaches work, up to a point. But both have fundamental limitations that become painfully obvious when you're building real-world, long-running agents.