Whistleblower allegation: Harvard muzzled disinfo team after $500 million Zuckerberg donation

By FRANK BAJAK

Dec. 4, 2023

A prominent disinformation scholar who left Harvard University in August has accused the school of muzzling her speech and stifling — then dismantling — her research team as it launched a deep dive in late 2021 into a trove of Facebook files she considers the most important documents in internet history.

The actions affecting Joan Donovan’s work coincided with a $500 million donation to Harvard by a foundation run by Facebook founder Mark Zuckerberg and his wife Priscilla Chan. In a whistleblower disclosure made public Monday, Donovan asks the Massachusetts attorney general’s office and the U.S. Department of Education to investigate “inappropriate influence” by Harvard’s general counsel.

The CEO of Whistleblower Aid, a legal nonprofit supporting Donovan, called the alleged behavior by Harvard’s Kennedy School and its dean a “shocking betrayal” of academic integrity at the elite school.

“Whether Harvard acted at the company’s direction or took the initiative on their own to protect (Facebook’s) interests, the outcome is the same: corporate interests are undermining research and academic freedom to the detriment of the public,” CEO Libby Liu said in a press statement.


Pentagon’s AI initiatives accelerate hard decisions on lethal autonomous weapons

By FRANK BAJAK

Nov. 25, 2023

NATIONAL HARBOR, Md. (AP) — Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces’ missions and helped Ukraine in its war against Russia. It tracks soldiers’ fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative — dubbed Replicator — seeks to “galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many,” Deputy Secretary of Defense Kathleen Hicks said in August.

While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy – including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.

That’s especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them — and none of China, Russia, Iran, India or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.


Insider Q&A: Pentagon AI chief on network-centric warfare, generative AI challenges

By FRANK BAJAK

Nov. 20, 2023

The Pentagon’s chief digital and artificial intelligence officer, Craig Martell, is alarmed by the potential for generative artificial intelligence systems like ChatGPT to deceive and sow disinformation. His talk on the technology at the DefCon hacker convention in August was a huge hit. But he’s anything but sour on reliable AI.

Not a soldier but a data scientist, Martell headed machine learning at companies including LinkedIn, Dropbox and Lyft before taking the job last year.

Marshalling the U.S. military’s data and determining what AI is trustworthy enough to take into battle is a big challenge in an increasingly unstable world where multiple countries are racing to develop lethal autonomous weapons.

The interview has been edited for length and clarity.

Q: How should we think about AI use in military applications?

A: All AI is, really, is counting the past to predict the future. I don’t actually think the modern wave of AI is any different.
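A toy sketch of that characterization, assuming nothing about the Pentagon’s actual systems: a bigram model that literally counts which word most often followed the current one in past text, then uses that count as its prediction. The mini corpus and words below are invented for illustration.

from collections import Counter, defaultdict

def train(tokens):
    # "Counting the past": tally which word follows which.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    # "Predicting the future": return the most frequent follower seen so far.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

history = "the drone flew north the drone flew south the drone landed".split()
model = train(history)
print(predict(model, "drone"))  # -> "flew", simply because it was seen most often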

Q: Pentagon planners say the China threat makes AI development urgent. Is China winning the AI arms race?

A: I find that metaphor somewhat flawed. When we had a nuclear arms race it was with a monolithic technology. AI is not that. Nor is it a Pandora’s box. It’s a set of technologies we apply on a case-by-case basis, verifying empirically whether it’s effective or not.

Q: The U.S. military is using AI tech to assist Ukraine. How are you helping?

A: Our team is not involved with Ukraine other than to help build a database for how allies provide assistance. It’s called Skyblue. We’re just helping make sure that stays organized.

Q: There is much discussion about autonomous lethal weaponry – like attack drones. The consensus is humans will ultimately be reduced to a supervisory role — being able to abort missions but mostly not interfering. Sound right?

A: In the military we train with a technology until we develop a justified confidence. We understand the limits of a system, know when it works and when it might not. How does this map to autonomous systems? Take my car. I trust the adaptive cruise control on it. The technology that is supposed to keep it from changing lanes, on the other hand, is terrible. So I don’t have justified confidence in that system and don’t use it. Extrapolate that to the military.

Q: The Air Force’s “loyal wingman” program in development would have drones fly in tandem with fighter jets flown by humans. Is the computer vision good enough to distinguish friend from foe?

A: Computer vision has made amazing strides in the past 10 years. Whether it’s useful in a particular situation is an empirical question. We need to determine the precision we are willing to accept for the use case and build against that criterion – and test. So we can’t generalize. I would really like us to stop talking about the technology as a monolith and talk instead about the capabilities we want.
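A minimal sketch of that set-a-bar-and-test idea; the acceptance threshold, labels and toy evaluation data below are hypothetical, not figures from the interview.

ACCEPTABLE_PRECISION = 0.95  # assumed acceptance criterion, not an official figure

def precision(predictions, ground_truth, positive_label="friend"):
    # Of everything the model labeled "friend", what fraction really was?
    true_pos = sum(1 for p, t in zip(predictions, ground_truth)
                   if p == positive_label and t == positive_label)
    false_pos = sum(1 for p, t in zip(predictions, ground_truth)
                    if p == positive_label and t != positive_label)
    flagged = true_pos + false_pos
    return true_pos / flagged if flagged else 0.0

# Toy labeled test set standing in for real evaluation data.
preds = ["friend", "friend", "foe", "friend", "foe"]
truth = ["friend", "foe",    "foe", "friend", "foe"]

measured = precision(preds, truth)
verdict = "meets" if measured >= ACCEPTABLE_PRECISION else "does not meet"
print(f"measured precision {measured:.2f} {verdict} the {ACCEPTABLE_PRECISION} acceptance bar")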


Leading Egyptian opposition politician targeted with spyware, researchers find

Sept. 24, 2023

By FRANK BAJAK

BOSTON (AP) — A leading Egyptian opposition politician was targeted with spyware multiple times after announcing a presidential bid — including with malware that automatically infects smartphones, security researchers have found. They say Egyptian authorities were likely behind the attempted hacks.

Discovery of the malware last week by researchers at Citizen Lab and Google’s Threat Analysis Group prompted Apple to rush out operating system updates for iPhones, iPads, Mac computers and Apple Watches to patch the associated vulnerabilities.

Citizen Lab said in a blog post that attempts beginning in August to hack former Egyptian lawmaker Ahmed Altantawy involved configuring his phone’s connection to the Vodafone Egypt mobile network to automatically infect it with Predator spyware if he visited certain websites not using the secure HTTPS protocol.

Citizen Lab said the effort likely failed because Altantawy had his phone in “lockdown mode,” which Apple recommends for iPhone users at high risk, including rights activists, journalists and political dissidents in countries like Egypt.

Prior to that, Citizen Lab said, attempts were made beginning in May to hack Altantawy’s phone with Predator via links in SMS and WhatsApp messages that he would have had to click on to become infected.

Once infected, the Predator spyware turns a smartphone into a remote eavesdropping device and lets the attacker siphon off data.
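A rough sketch of why the plain-HTTP detail matters, under the assumption the researchers describe: a party on the network path can rewrite unencrypted responses, while HTTPS traffic is encrypted and authenticated. The URLs below are placeholders, not sites involved in the case.

from urllib.parse import urlparse

def injectable_by_on_path_attacker(url: str) -> bool:
    # Cleartext HTTP responses can be silently rewritten by whoever carries the
    # traffic (for example, a mobile network operator); tampering with an HTTPS
    # response breaks the TLS connection instead of going unnoticed.
    return urlparse(url).scheme.lower() == "http"

visited = [
    "http://example-news.test/article",   # placeholder plain-HTTP link
    "https://example-mail.test/inbox",    # placeholder HTTPS link
]

for url in visited:
    status = "exposed to injection" if injectable_by_on_path_attacker(url) else "protected by TLS"
    print(f"{url} -> {status}")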

Given that Egypt is a known customer of Predator’s maker, Cytrox, and the spyware was delivered via network injection from Egyptian soil, Citizen Lab said it had “high confidence” Egypt’s government was behind the attack.


Book Review: Novelist and blogger Cory Doctorow pens a manual for destroying Big Tech

Sept. 12, 2023

By FRANK BAJAK

As a leading blogger in the pre-Substack era, novelist and public-interest technologist Cory Doctorow often warned that Big Tech was rendering cyberspace a polluted, dystopian, crassly commercial and often hostile world of limited options.

Now it’s happened. Facebook, Instagram and other walled fiefdoms of surveillance capitalism distract discourse with scrolls of targeted ads and trending video reels. More genteel competitors were long ago muscled out.

Hateful trolls, violent speech and addictive algorithms thrive. And when a user account is mistakenly or unjustly shuttered, platform automation means the aggrieved will encounter callous indifference. It’s gotten to where anti-Big Tech initiatives enjoy bipartisan backing in an otherwise teetering U.S. democracy.

“There is no fixing Big Tech,” Doctorow, who blogged for years on the website “Boing Boing,” writes in his new book “The Internet Con: How To Seize The Means of Computation.” The breezily written 173-page manifesto is for people who want to destroy it.


Don’t expect quick fixes in ‘red-teaming’ of AI models. Security was an afterthought

By FRANK BAJAK

Aug. 13, 2023

BOSTON (AP) — White House officials concerned by AI chatbots’ potential for societal harm and the Silicon Valley powerhouses rushing them to market are heavily invested in a three-day competition ending Sunday at the DefCon hacker convention in Las Vegas.

Some 2,200 competitors tapped on laptops seeking to expose flaws in eight leading large-language models representative of technology’s next big thing. But don’t expect quick results from this first-ever independent “red-teaming” of multiple models.

Findings won’t be made public until about February. And even then, fixing flaws in these digital constructs — whose inner workings are neither wholly trustworthy nor fully fathomed even by their creators — will take time and millions of dollars.

Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.


Ransomware criminals are dumping kids’ private files online after school hacks

By FRANK BAJAK, HEATHER HOLLINGSWORTH AND LARRY FENN

July 5, 2023

The confidential documents stolen from schools and dumped online by ransomware gangs are raw, intimate and graphic. They describe student sexual assaults, psychiatric hospitalizations, abusive parents, truancy — even suicide attempts.

“Please do something,” begged a student in one leaked file, recalling the trauma of continually bumping into an ex-abuser at a school in Minneapolis. Other victims talked about wetting the bed or crying themselves to sleep.

Complete sexual assault case folios containing these details were among more than 300,000 files dumped online in March after the 36,000-student Minneapolis Public Schools refused to pay a $1 million ransom. Other exposed data included medical records, discrimination complaints, Social Security numbers and contact information of district employees.

Rich in digitized data, the nation’s schools are prime targets for far-flung criminal hackers, who are assiduously locating and scooping up sensitive files that not long ago were committed to paper in locked cabinets. “In this case, everybody has a key,” said cybersecurity expert Ian Coldwater, whose son attends a Minneapolis high school.


Microsoft admits Outlook, cloud platform disruptions were cyberattacks

By FRANK BAJAK

June 17, 2023

BOSTON (AP) — In early June, sporadic but serious service disruptions plagued Microsoft’s flagship office suite — including the Outlook email and OneDrive file-sharing apps — and cloud computing platform. A shadowy hacktivist group claimed responsibility, saying it flooded the sites with junk traffic in distributed denial-of-service attacks.

Initially reluctant to name the cause, Microsoft has now disclosed that DDoS attacks by the murky upstart were indeed to blame.

But the software giant has offered few details — and did not immediately comment on how many customers were affected and whether the impact was global. A spokeswoman confirmed that the group that calls itself Anonymous Sudan was behind the attacks. It claimed responsibility on its Telegram social media channel at the time. Some security researchers believe the group to be Russian.

Microsoft’s explanation in a blog post Friday evening followed a request by The Associated Press two days earlier. Slim on details, the post said the attacks “temporarily impacted availability” of some services. It said the attackers were focused on “disruption and publicity” and likely used rented cloud infrastructure and virtual private networks to bombard Microsoft servers from so-called botnets of zombie computers around the globe.


Musk deputy’s words on Starlink ‘weaponization’ vex Ukraine

By FRANK BAJAK

Feb. 9, 2023

BOSTON (AP) — Ukrainians reacted Thursday with puzzlement and some ire to comments by a top Starlink official that their country has “weaponized” the satellite internet service, which has been pivotal to their national survival.

President Gwynne Shotwell of SpaceX, which runs Starlink, was also reported to have said at the same venue Wednesday that the Elon Musk-controlled company has taken unspecified action to prevent Ukraine’s military from using Starlink technology against Russian invaders.

The network of low-orbiting satellites has been crucial to Ukraine’s use of battlefield drones — a central fixture of the year-old war — and the country’s defenders have no viable alternative. The satellite links help Ukrainian fighters locate the enemy and target long-range artillery strikes.

Onstage at a conference in Washington, D.C., Shotwell said: “We were really pleased to be able to provide Ukraine connectivity and help them in their fight for freedom. It was never intended to be weaponized. However, Ukrainians have leveraged it in ways that were unintentional and not part of any agreement.”


Drone advances in Ukraine could augur dawn of killer robots

By FRANK BAJAK and HANNA ARHIROVA

Jan. 3, 2023

KYIV, Ukraine (AP) — Drone advances in Ukraine have accelerated a long-anticipated technology trend that could soon bring the world’s first fully autonomous fighting robots to the battlefield, inaugurating a new age of warfare.

The longer the war lasts, the more likely it becomes that drones will be used to identify, select and attack targets without help from humans, according to military analysts, combatants and artificial intelligence researchers.

That would mark a revolution in military technology as profound as the introduction of the machine gun. Ukraine already has semi-autonomous attack drones and counter-drone weapons endowed with AI. Russia also claims to possess AI weaponry, though the claims are unproven. But there are no confirmed instances of a nation putting into combat robots that have killed entirely on their own.

Experts say it may be only a matter of time before either Russia or Ukraine, or both, deploy them.
