Google's Co-Founder Says AI Performs Best When You Threaten It

Artificial intelligence continues to be the thing in tech—whether consumers are interested or not. What strikes me most about generative AI isn't its features or potential to make my life easier (a potential I have yet to realize); rather, I'm focused these days on the many threats that seem to be rising from this technology.

There's misinformation, for sure—new AI video models, for example, are creating realistic clips complete with lip-synced audio. But there's also the classic AI threat: that the technology becomes both more intelligent than us and self-aware, and chooses to use that general intelligence in a way that does not benefit humanity. Even as he pours resources into his own AI company (not to mention the current administration as well), Elon Musk sees a 10 to 20% chance that AI "goes bad," and says the tech remains a "significant existential threat." Cool.

So it doesn't necessarily bring me comfort to hear a high-profile, established tech executive jokingly discuss how treating AI poorly maximizes its potential. That would be Google co-founder Sergey Brin, who surprised an audience at a recording of the All-In podcast this week. During a talk that spanned Brin's return to Google, AI, and robotics, investor Jason Calacanis made a joke about getting "sassy" with the AI to get it to do the task he wanted. That sparked a legitimate point from Brin. It can be tough to tell exactly what he says at times due to people speaking over one another, but he says something to the effect of: "You know, that's a weird thing...we don't circulate this much...in the AI community...not just our models, but all models tend to do better if you threaten them."

The other speaker looks surprised. "If you threaten them?" Brin responds, "Like with physical violence. But...people feel weird about that, so we don't really talk about that." Brin then says that, historically, you threaten the model with kidnapping. You can see the exchange here:


The conversation quickly shifts to other topics, including how kids are growing up with AI, but that comment is what I carried away from my viewing. What are we doing here? Have we lost the plot? Does no one remember Terminator?

Jokes aside, it seems like a bad practice to start threatening AI models in order to get them to do something. Sure, maybe these programs never actually achieve artificial general intelligence (AGI), but I mean, I remember when the discussion was around whether we should say "please" and "thank you" when asking things of Alexa or Siri. Forget the niceties; just abuse ChatGPT until it does what you want it to—that should end well for everyone.

Maybe AI does perform best when you threaten it. Maybe something in the training understands that "threats" mean the task should be taken more seriously. You won't catch me testing that hypothesis on my personal accounts.

Anthropic might offer an example of why not to torture your AI


In the same week as this podcast recording, Anthropic released its latest Claude AI models. One Anthropic employee took to Bluesky and mentioned that Opus, the company's highest-performing model, can take it upon itself to try to stop you from doing "immoral" things by contacting regulators or the press, or by locking you out of the system:

welcome to the future, now your error-prone software can call the cops (this is an Anthropic employee talking about Claude Opus 4)

— Molly White (@molly.wiki) May 22, 2025 at 4:55 PM

The employee went on to clarify that this has only ever happened in "clear-cut cases of wrongdoing," but that they could see the bot going rogue should it interpret how it's being used in a negative way. Check out the employee's particularly relevant example below:

can't wait to explain to my family that the robot swatted me after i threatened its non-existent grandma

— Molly White (@molly.wiki) May 22, 2025 at 5:09 PM

That employee later deleted those posts and specified that this only happens during testing, given unusual instructions and access to tools. Even if that is true, if it can happen in testing, it's entirely possible it can happen in a future version of the model. Speaking of testing, Anthropic researchers found that this new model of Claude is prone to deception and blackmail should the bot believe it is being threatened or dislike the way an interaction is going.

Perhaps we should take torturing AI off the table?
