"Will the robots inherit the earth? Yes, but they will be our children."


You have 10 seconds to comply

At BiTE HQ we are of two minds about the imminent robot revolution. While the idea of helpful androids fulfills our childhood sci-fi fantasies, those same fantasies also remind us that for every Johnny 5 there's an Ash. Tech pundits have been warning about the potential dangers of AI for years; in fact, the late Stephen Hawking told the BBC, "The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."


And then there’s the issue of bias. Humans, for all our faults, operate (usually) by a moral and ethical code. But no matter how hard we try, we all see through our own lenses and we all operate on our own biases. So how do we teach a machine to be fair and operate without bias? Is it even possible?

Already we’re seeing problems of bias in examples of AI used in law enforcement – problems some engineers don’t seem overly eager to correct: 

I’ve seen firsthand too many researchers who demonstrate a surprising nonchalance about the human impact. I recently attended an innovation conference just outside of Silicon Valley. One of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.

When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out.” This is just one of the worst examples I’ve seen from many researchers who don’t have these issues on their radars. I suppose that requiring computer scientists to double major in moral philosophy isn’t practical, but the lack of concern is striking.  – Kyle Dent, TechCrunch

Amazon (and if anyone would be on the cutting edge of scary AI, it'd be the company that literally wants to create something called a "war cloud") had to scrap its resume-sorting AI because the damn thing taught itself to select male candidates in far greater numbers than female ones.

Elon Musk-founded OpenAI is working on an AGI that could mimic a human brain, and while that sounds like a pretty kickass origin story for a supervillain, OpenAI is explicit about their goal of making beneficial and, more importantly, safe AI. Companies are starting to address the ethics behind the technology, but we need to have the conversation as a society, too.

Humans are great. We're smart and we create incredible things that take our breath away. But sometimes we create them so quickly and so recklessly that they outsmart us. Hopefully we'll figure this out…before Skynet goes online.

Shaken and Stirred

Speaking of horrible robots…

Robocalls have gone from the irritating but innocuous sales calls about Reader’s Digest to bizarre threats that the IRS is going to literally break your kneecaps unless you immediately send $500 in iTunes gift cards to the following address. And while it’s easy to make fun of these farcical scams, not everyone is savvy enough to avoid them. Even if you are, you probably spend an appreciable part of your day blocking spoofed numbers and cursing at your phone screen.

Waiting for the FCC to take action was a fool’s errand, so attorneys general from all 50 US states and the District of Columbia are working with a dozen major telecom companies to adopt a set of “anti-robocalling principles.”

AT&T, T-Mobile, Verizon, Comcast and others are beginning to make it more difficult for these robocalls to reach you. The eight principles (which you should really read in full) include things like deeper investigations into suspicious calls, confirming the identity of commercial callers, monitoring heavy network traffic, and implementing STIR/SHAKEN call authentication.
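For the curious, STIR/SHAKEN works by having the originating carrier attach a signed token (a "PASSporT," which is a JWT) to each call, carrying an attestation level: "A" (carrier vouches for the caller and the number), "B" (knows the caller, can't vouch for the number), or "C" (call merely transited its network). Here's a minimal, hypothetical Python sketch of pulling the attestation level out of such a token; the demo token is unsigned and fabricated for illustration, since a real PASSporT is signed with the originating carrier's certificate:

```python
import base64
import json

def b64url_decode(part: str) -> dict:
    # JWT segments use unpadded base64url; restore padding before decoding
    pad = "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(part + pad))

def attestation(passport_jwt: str) -> str:
    """Return the SHAKEN attestation level ('A', 'B', or 'C') from a PASSporT."""
    header_part, payload_part, _sig = passport_jwt.split(".")
    header = b64url_decode(header_part)
    if header.get("ppt") != "shaken":
        raise ValueError("not a SHAKEN PASSporT")
    # NOTE: a real verifier must check the signature against the carrier's
    # certificate before trusting any claim; skipped in this sketch.
    return b64url_decode(payload_part)["attest"]

def make_demo_token(attest: str) -> str:
    # Fabricated, unsigned token purely to exercise the parser above
    enc = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj).encode()).rstrip(b"=").decode()
    header = {"alg": "ES256", "typ": "passport", "ppt": "shaken"}
    payload = {"attest": attest,
               "orig": {"tn": "12025551000"},
               "dest": {"tn": ["12025559999"]},
               "iat": 1693526400}
    return f"{enc(header)}.{enc(payload)}.sig-goes-here"

print(attestation(make_demo_token("A")))  # prints "A"
```

The point of the scheme is that a spoofer can't produce a valid "A" attestation for a number they don't control, so your carrier (and eventually your phone) can flag or drop calls whose tokens don't check out.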

None of these principles are legally binding, but telecom companies cooperating with state AGs and the FCC means – just maybe – we’ll see prosecutions of some of the worst robocall offenders.

This is literally the only good thing FCC Chair Ajit Pai has ever done in his life so, ya know, hooray! 
