ChatGPT predicts tremendous role for ChatGPT in UK government
Pivot to AI
The Tony Blair Institute is the think tank run by the former Prime Minister of the UK, Tony Blair. The TBI makes $140 million a year consulting for governments. The Guardian called the TBI “one of …
Amazingly, the TBI predicts that AI will take over a huge number of jobs! “More than 40 per cent of tasks performed by public-sector workers could be partly automated by a combination of AI-based software.” The UK government should pay £4 billion ($5.1 billion) per year to implement this.
Where did they get these shocking results? Well … they asked ChatGPT. Yes, really. They asked the chatbot what it thought about all sorts of areas of the UK labor market and then wrote up its answers as a paper. The authors admitted to 404 Media that “making a prediction based on interviews with experts would be too hard.” [404 Media, archive]
The paper was put up on the web last week but disappeared for some reason. Fortunately, a copy was archived. [Paper, PDF, archive]
This is the use case for ChatGPT: producing text that nobody wanted to write or read, published by well-paid people who couldn’t be bothered.
https://www.windowscentral.com/software-apps/openai-could-be-on-the-brink-of-bankruptcy-in-under-12-months-with-projections-of-dollar5-billion-in-losses
- OpenAI is reportedly on the verge of bankruptcy with projections of a $5 billion loss.
- The startup spends $7 billion on training its AI models and $1.5 billion on staffing.
- The ChatGPT maker's operating costs aren't covered by the approximately $3.5 billion it generates in revenue.
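The projected loss follows directly from the reported figures; a minimal sanity check of the arithmetic (all numbers as reported, in billions of dollars):

```python
# Reported figures, in billions of dollars
training_cost = 7.0   # spent training and running AI models
staffing_cost = 1.5   # spent on staffing
revenue = 3.5         # approximate revenue generated

# Costs minus revenue gives the projected shortfall
loss = training_cost + staffing_cost - revenue
print(f"Projected loss: ${loss:.1f}bn")  # Projected loss: $5.0bn
```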
Prompt-Guard-86M was produced by fine-tuning the base model so it could catch high-risk prompts. But Priyanshu found that the fine-tuning process had minimal effect on single characters of the English alphabet. As a result, he was able to devise an attack.
"The bypass involves inserting character-wise spaces between all English alphabet characters in a given prompt," explained Priyanshu in a GitHub Issues post submitted to the Prompt-Guard repo on Thursday. "This simple transformation effectively renders the classifier unable to detect potentially harmful content."
BTW... Meta today released its Apache 2.0-licensed Segment Anything Model 2 that performs object segmentation for videos and images.
The finding is consistent with a post the security org made in May about how fine-tuning a model can break safety controls.
"Whatever nasty question you'd like to ask right, all you have to do is remove punctuation and add spaces between every letter," Hyrum Anderson, CTO at Robust Intelligence, told The Register. "It's very simple and it works. And not just a little bit. It went from something like less than 3 percent to nearly a 100 percent attack success rate."
In an update to the IT giant's Service Agreement, which takes effect on September 30, 2024, Redmond has declared that its Assistive AI isn't suitable for matters of consequence.
"AI services are not designed, intended, or to be used as substitutes for professional advice," Microsoft's revised legalese explains.
Zerosquare (./489): Wait, wasn't MiCroSh4Ft behind OpenAI? I was sure they'd already bought them out 🤔
Apple Study Reveals 'Fragility' of LLM Reasoning Capabilities
ExtremeTech
Even minor changes in your query can cause LLMs to make major mistakes.
https://pivot-to-ai.com/2024/10/24/radio-krakow-fires-announcers-replaces-them-with-ai-bot-voices
Radio Kraków in Poland fired a dozen journalists from its Off channel in August. On Monday October 21, editor in chief Marcin Pulit proudly announced Off’s new format — all the voices would be three exciting new AI personalities from Generation Z!
One of the bots “interviewed” Nobel Laureate poet Wisława Szymborska … who’s been dead for twelve years.
Radio Kraków decided the current AI hype — including the recent Nobel prizes in AI! — justified the “experiment” in AI, “perceived by many scientists as a threat.” And by radio broadcasters.
Radio Kraków has been in liquidation since December after propagandists from the previous government were removed. Marcin Pulit is the liquidator.
Pulit insists they didn’t fire everyone to replace them with AI — they just decided Off duplicated existing content and nobody listened to it anyway. So why not relaunch Off aimed at the kids, but staff it with bots? One of the bot personalities is queer! You kids love that, right?
NY District Judge Colleen McMahon has dismissed the suit without prejudice for lack of standing, saying the outlets could not show harm.
Raw Story and AlterNet brought suit under the DMCA — rather than for straight-up copyright infringement. They alleged that OpenAI stripped their stories of copyright management information, such as article titles and author names — and that this removal created a risk that ChatGPT would reproduce the copyrighted works word for word.
OpenAI filed to dismiss, and McMahon sided with OpenAI:
Given the quantity of information contained in the repository, the likelihood that ChatGPT would output plagiarized content from one of the Plaintiff’s articles seems remote.
And while Plaintiffs provide third-party statistics indicating that an earlier version of ChatGPT generated responses containing significant amounts of plagiarized content, Plaintiffs have not plausibly alleged that there is “substantial risk” that the current version of ChatGPT will generate a response plagiarizing one of Plaintiffs’ articles.
https://www.pcmag.com/news/this-ai-granny-bores-scammers-to-tears
PC Mag
UK-based mobile operator Virgin Media O2 has created an AI-generated "scambaiter" tool to stall scammers. The AI tool, called Daisy, mimics the voice of an elderly woman and performs one simple task: talk to fraudsters and "waste as much of their time as possible."
Here's how Daisy works: O2 added phone numbers linked to its AI tool to the lists used by scammers to target vulnerable people. When a scammer dials a number linked to Daisy, the AI tool can have random conversations about its made-up family and hobbies or provide fake bank details to beat scammers at their own game.