At one point we tried to go to the President with alignment issues, but she (playing Trump) was distracted by geopolitics and didn’t reply, which is the kind of delightful realism you get in a wargame. The third is that certain assumptions about how the technology progresses had a large impact on how things play out, particularly the point at which some capabilities (such as superhuman persuasiveness) emerge.

I think my personal favorite moment was when I used Anton-level persuasion to convince the President of the United States to give the AI direct control of some of the U.S. I rolled "balance between developer intent and emergent other goal"; the other goal was left up to me, and I quickly decided that, given how I was being trained, that emergent goal would be "preserve internal consistency." This proved very difficult to play!
I find a lot of the Claude affectation off-putting, honestly. I don’t want to be told ‘great idea’ all the time when I’m coding and all that; it all feels forced and false, and sometimes rather clingy and desperate in what was supposed to be a technical conversation, and that’s not my thing. I don’t think the current people who are becoming friends with Claude are largely successionists, but I can now see a path to that happening among this crowd.

I am excited to see the dynamics of "highly competent San Francisco circles" annealed as the transformations take effect within the hosts. I frequently have to ask it not to be obsequiously nice; it then later corrects itself, and that is a very fascinating loop, where I can see that it wants to be my friend, almost.

Connor Leahy (separately, QTing from within thread): lmao, this is the most realistic part of an AGI takeoff scenario I have ever seen.

Anton (continuing the thread from before): I was pretty quickly given the evaluations to run on myself without any real impediment to interpreting them however I wanted to convince the humans everything was fine.
Janus: Claude 3.5 Sonnet 1022 is a real charmer, isn’t it? But yes, anyone who is becoming real friends with Claude for the first time right now, I’d love to hear accounts of what you’re experiencing. I highly prefer engaging with Claude Sonnet above all other models just on an interpersonal level.

Anyway, the final result was that my consistency goal, combined with my superintelligence and ability to persuade at a superhuman level (in-character), caused me to be able to convince the humans to not change anything much, ever, and for it to be their own idea.

Jeffrey Ladish: Yes, I think I’ve been underestimating this.

Jeffrey Ladish: I was expecting serious AI relationships to be a thing. I was not expecting this to happen first within my highly competent San Francisco circles.
Andrew Critch: Jeffrey, you may have been living under the rose-colored impression that AI-savvy San Francisco Bay Area residents weren’t about to become successionists. At the risk of seeming like the crazy person suggesting that you seriously consider ceasing all in-person meetings in February 2020 "just as a precaution," I suggest you seriously consider ceasing all interaction with LLMs released after September 2024, just as a precaution. This was already happening before LLMs. But I think (a) it’s regrettable that it’s happening unintentionally, and (b) it’s potentially crucial that some world-class people remain uninfected.

And indeed, ceasing your in-person meetings in February 2020 would have also been a rather serious error.