AI


Oct. 16, 2020

Hacked Billboards Can Make Teslas See ‘Phantom Objects,’ Causing Them to Swerve or Stop Abruptly

Tesla’s Autopilot relies on cameras rather than LIDAR, which means it can be fooled by messages on hacked billboards and by projections created by attackers. Security researchers have demonstrated how the driver-assistance system can be tricked into changing speed, swerving, or stopping abruptly simply by projecting fake road signs or virtual objects in front of the car. Their hacks worked both on a Tesla running HW3, the latest version of the company’s Autopilot hardware, and on the previous generation, HW2.5.

Sep. 4, 2019

Scammer Successfully Deepfaked CEO’s Voice To Fool Underling Into Transferring $243,000

The CEO of a UK-based energy firm thought he was following his boss’s urgent orders in March when he transferred funds to a third party. But the request actually came from the AI-assisted voice of a fraudster. The Wall Street Journal reports that the mark believed he was speaking to the CEO of his firm’s German parent company.

Nov. 19, 2018

The dark side of YouTube

The YouTube algorithm that I helped build in 2011 still recommends the flat earth theory hundreds of millions of times. This investigation by @RawStory shows some of the real-life consequences of this badly designed AI.

Source: threader.app

Sep. 9, 2018

IBM secretly used New York’s CCTV cameras to train its surveillance software

New technology is making surveillance cameras more powerful than ever. Such systems are often developed away from the public eye, as detailed in a new report showing how IBM worked with the NYPD to create software that could search CCTV footage for individuals based on their skin tone. The software offered features like searching for individuals by age, gender, and skin tone, and was reportedly developed and tested on surveillance cameras run through the Lower Manhattan Security Initiative.

Jun. 6, 2018

Blocking facial recognition surveillance using AI

If Artificial Intelligence (AI) is increasingly able to recognize and classify faces, then perhaps the only way to counter this creeping surveillance is to use another AI to defeat it. We’re in the early years of AI-powered image and face recognition, but researchers at the University of Toronto have already come up with a way this might be possible. The principle at the heart of this technique is adversarial training, in which one neural network’s image recognition is disrupted by a second network trained to understand how it works.
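To make the idea concrete, here is a minimal sketch of one well-known way to compute such a disruptive perturbation, the fast gradient sign method. This is not the Toronto researchers’ exact system; the model, tensor shapes, and epsilon budget below are illustrative assumptions.

```python
# Minimal sketch (PyTorch) of an adversarial perturbation via the
# fast gradient sign method. `model`, the shapes, and `epsilon` are
# illustrative assumptions, not details from the Toronto paper.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` nudged to degrade `model`'s prediction.

    model:      any differentiable classifier (e.g., a face recognizer)
    image:      tensor of shape (1, C, H, W) with values in [0, 1]
    true_label: tensor holding the correct class index, shape (1,)
    epsilon:    per-pixel change budget; small values stay imperceptible
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Move each pixel by +/- epsilon in the direction that increases loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The Toronto approach goes further: rather than computing gradients one image at a time as here, it trains a second network to generate the disruption directly.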

May. 31, 2018

Automated Facial Recognition: Menace, Farce or Both?

In its letter, the ACLU argues that Amazon, which has in the past opposed secret government surveillance, should not be in the business of selling AFR technology that the company claims can “identify people in real-time by instantaneously searching databases containing tens of millions of faces.” Further, the ACLU insists, Rekognition’s capability to track “persons of interest,” coupled with its other features which “read like a user manual for authoritarian surveillance,” lends itself to the violation and abuse of individuals’ civil rights. Amazon naturally disagrees.

May. 28, 2018

China is exporting facial recognition software to Africa

For all the promise it holds for the future, artificial intelligence is still guilty of historic bias. Voice recognition software struggles with English accents that are not American or British, and facial recognition can be guilty of racial profiling. Even as the technology outpaces public debate about race, China seems to be getting ahead at recognizing a diverse range of faces across the wider world, despite its own struggles with racial insensitivity.

May. 15, 2018

Hiding Information in Plain Text

Computer scientists have now invented a way to hide secret messages in ordinary text by imperceptibly changing the shapes of letters. The new technique, named FontCode, works with common font families such as Times Roman and Helvetica. It is compatible with most word-processing software, including Microsoft Word, as well as image-editing and drawing programs, such as Adobe Photoshop and Adobe Illustrator.

FontCode embeds data into texts using minute perturbations to components of letters. This includes changing the width of strokes, adjusting the height of ascenders and descenders, and tightening or loosening the curves in serifs and the bowls of letters such as o, p, and b. A kind of artificial-intelligence system known as a convolutional neural network can recognize these perturbations and help recover the embedded messages. The amount of information FontCode can hide is limited only by the number of letters on which it acts, the researchers say.
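As a toy illustration of the embedding arithmetic only: the sketch below treats each letter as a carrier that can take one of a handful of near-identical glyph variants, and writes the message as base-N digits across the letters. The codebook size of five variants per glyph is an assumption, not a figure from the paper, and the variant shapes and CNN decoder are out of scope here.

```python
# Toy sketch of FontCode-style embedding arithmetic. The glyph-variant
# shapes and the CNN that recovers them are omitted; the codebook size
# below is an assumption, not a figure from the paper.
VARIANTS_PER_GLYPH = 5  # assumed: five visually near-identical variants

def encode(message: bytes, cover_text: str) -> list[tuple[str, int]]:
    """Assign each letter of cover_text a glyph-variant index (a base-5 digit)."""
    number = int.from_bytes(message, "big")
    digits = []
    while number:
        number, digit = divmod(number, VARIANTS_PER_GLYPH)
        digits.append(digit)  # least-significant digit first
    letters = [c for c in cover_text if c.isalpha()]
    if len(digits) > len(letters):
        raise ValueError("cover text has too few letters for this message")
    digits += [0] * (len(letters) - len(digits))  # pad high end with variant 0
    return list(zip(letters, digits))

def decode(pairs: list[tuple[str, int]]) -> bytes:
    """Rebuild the message from recovered (letter, variant-index) pairs."""
    number = 0
    for _, digit in reversed(pairs):  # most-significant digit first
        number = number * VARIANTS_PER_GLYPH + digit
    return number.to_bytes((number.bit_length() + 7) // 8, "big")

# Round trip: decode(encode(b"attack at dawn", paragraph)) == b"attack at dawn"
```

Under this assumed codebook each letter carries log2(5) ≈ 2.3 bits, which is why, as the researchers note, capacity is limited only by the number of letters available.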

May. 9, 2018

Uhh, Google Assistant Impersonating a Human on the Phone Is Scary as Hell to Me

The near-future terror of this project has to do with how it could be used to further erode your privacy and security. Google has access to a lot of your information. It knows everything you browse in Chrome and the places you go in Google Maps.

If you’ve got an Android device, it knows who you call. If you use Gmail, it knows how regularly you skip chain emails from your mom. Giving an AI that pretends to be human access to all that information should terrify you.

May. 8, 2018

UK police say 92-percent false positive facial recognition is no big deal

New data about the South Wales Police’s use of the technology, obtained by Wired UK and The Guardian through a public records request, shows that of the 2,470 alerts from the facial recognition system, 2,297 were false positives. In other words, nine out of 10 times, the system erroneously flagged someone as being suspicious or worthy of arrest.
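The “nine out of 10” characterization follows directly from the two reported totals:

```python
# False positive rate from the totals reported by Wired UK and The Guardian.
false_positives, total_alerts = 2297, 2470
print(f"{false_positives / total_alerts:.1%}")  # -> 93.0%, roughly nine in ten
```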

May. 6, 2018

Ticketmaster to trial facial recognition technology at live venues

The company suggested a number of other ways it may use the technology, including to serve more personalized offers and product tie-ins while attendees move around the venue. It would also allow for “development of deeper customer relationships” between fans, artists, venues, and teams. Moreover, Ticketmaster touts the technology as boosting safety and security, as it allows venues to know exactly who is in attendance — though an e-ticket tied to an individual’s mobile device would presumably offer a similar benefit.

Apr. 27, 2018

Police Body Cameras Could Get Facial Recognition Technology

More than 40 civil rights, technology, media, and privacy groups, including the American Civil Liberties Union and the NAACP, have voiced their concerns about police body cameras in a letter to the AI Ethics Board.

Source: fortune.com

Apr. 26, 2018

Nukes in the Age of AI

In 1983, Soviet Lieutenant Colonel Stanislav Petrov sat in a bunker in Moscow watching monitors and waiting for an attack from the US. If he saw one, he would report it up the chain and the Soviet Union would retaliate with nuclear hellfire. One September night, the monitors warned him that missiles were headed to Moscow.

But Petrov hesitated. He thought it might have been a false alarm.

Apr. 18, 2018

This Is the Facial Recognition Tool at the Heart of a Class Action Suit Against Facebook

As Reuters reports, the lawsuit alleges that Facebook improperly collected and stored users’ biometric data. It was originally filed in 2015 by Facebook users in Illinois, which passed the Biometric Information Privacy Act (BIPA) in 2008. The law regulates the collection and storage of biometric data, and requires that a company receive an individual’s consent before it obtains their information.