
Here’s how artificial intelligence has evolved in the 19 years Alita: Battle Angel’s film adaptation spent in development

A look back on what the major shifts in technology mean for our Alita-inspired futures.

February 6, 2019

If it exists at all, the line between present and future feels as though it’s growing increasingly thin and indistinct. Month in and month out, radical advancements in technology and artificial intelligence reshape not just our routines, but our perceptions of reality. Even if we can’t access these burgeoning technologies—let’s be real, not all of us have money for a self-driving vehicle—we know they’re out there, and that they stand to change how we operate in the world. Is the driver next to us steering, or is their car? Is my friend’s smart home device listening to us plot the Toronto Raptors’ 2019 championship run? Is that bag of chips at the grocery store staring back at me?

Most of these developments begin as fragments: a fantasy or an idea, not quite real enough for the constraints of our current world. But, slowly, steadily, these ideas become reality. James Cameron’s upcoming manga-based futuristic action epic, Alita: Battle Angel, has followed a similar trajectory. The film has been in the works since 2000, and with its impending theatrical release, the 19-year wait is almost over.

Based on Japanese manga artist Yukito Kishiro’s 1990 cyberpunk series Alita: Battle Angel (known in Japanese as Gunnm), the live-action rendering, directed by Robert Rodriguez and produced and co-written by Cameron, follows the titular Alita (Rosa Salazar), a scrapped cyborg found in a junkyard in post-apocalyptic Iron City and revived by Dr. Dyson Ido (Christoph Waltz). Alita can’t remember her past, but it’s clear that unravelling it will reveal the secrets of her identity and the significance of her existence. Along the way, she’ll probably kick some serious robot butt, too; she competes in a combat sport called Motorball—so expect some off-the-wall, bonkers battle scenes!

Alita: Battle Angel is more than just a jaw-dropping piece of dystopian futurism, though. It digs into the same questions we grapple with every day now: How does our past define our future? How does technology empower us—and, more importantly, how does it control us? We’re not cyborgs yet, though an argument could be made that our cell phones are basically synthetic, AI-powered appendages. But since Alita began its journey to the big screen at the turn of the millennium, a lot has changed. Let’s look back on the major shifts in tech and artificial intelligence that have transpired over the past two decades—and what they mean for our Alita-inspired futures.

2000-2005

These were, for the most part, blessedly simple years that preceded and precipitated the accelerated, constant tech boom that characterizes life in 2019. These were the halcyon days of “novelty tech”—new machines that were fun, or whimsical, or slightly helpful. Critical assessments of these advancements were unpopular, mostly relegated to academic circles. Ignorance was bliss, baby!

  • In the year 2000, many of us got our first dog. But it wasn’t furry—it was plastic and full of computer chips. Robopets like Poo-Chi hit the market, bringing to fruition a centuries-long toil to find a pet that won’t crap on the carpet. The lack of expensive trips to the vet was a plus, too.
  • Remember Tom Haverford’s DJ Roomba? Its origins go back to 2002, when the first Roomba hit the market. The robotic vacuum took care of your chores for you, and knew how to avoid obstacles along the way. This is objectively Good Tech.
  • Two of NASA’s rovers began exploring Mars on their own in 2004—no adult supervision needed. Now, we could get robots to explore where we couldn’t. Another example of Good Tech: Tech that, generally, can’t be applied for suspect or immoral gains. Also, the highest-resolution colour photo taken on another planet up to that point came from the first rover to land, Spirit.
  • In 2005, the digital marketing algorithm’s grandparent arrived in the form of “recommendation technology,” which catalogued users’ web and media consumption patterns to suggest similar content and serve ads alongside it. This is, some might say, an origin story for Bad Tech: among the first examples of 21st-century computer technology being bent toward purely profit-driven marketing. If only we could go back… Alas. (Right on cue, YouTube was born in 2005.) For the curious, a rough sketch of the idea follows this list.
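Here’s what that early “people who watched X also watched Y” logic can look like in practice: a minimal, hypothetical Python sketch of a co-occurrence recommender. The users and items below are invented, and this isn’t any particular company’s system.

```python
from collections import Counter, defaultdict

# Hypothetical viewing histories: user -> the set of items they consumed.
histories = {
    "user_a": {"robot_doc", "space_race", "cat_clips"},
    "user_b": {"robot_doc", "cat_clips"},
    "user_c": {"space_race", "diy_vacuums"},
}

# Count how often each pair of items appears in the same person's history.
co_counts = defaultdict(Counter)
for items in histories.values():
    for item in items:
        for other in items - {item}:
            co_counts[item][other] += 1

def recommend(item, top_n=2):
    """Suggest the items most often consumed alongside `item`."""
    return [other for other, _ in co_counts[item].most_common(top_n)]

print(recommend("robot_doc"))  # ['cat_clips', 'space_race']
```

Modern recommenders are far more sophisticated, but the basic move is the same: infer what you’ll want next from what people like you already consumed.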

2006-2010

This is when things start to get messier, when tech begins to snowball. If the first five years were filled with frivolous, mostly untroubled leisure-tech, the next five would see a shift: slowly transforming leisure into capital, monetizing the gadgets and codes and programs that began to permeate and characterize pop culture.

  • Twitter is created in 2006. It now counts around 335 million active users. The platform began as a social media site for bite-sized thoughts, jokes, and observations, but now it’s mostly for solidarity-building via nihilism.
  • In 2007, YouTube rolled out in-video advertisements, and Facebook introduced its advertising platform, which allowed businesses to target ads based on consumer data. These social platforms were no longer hallmarks of leisure-media—they were developing into ad-driven juggernauts.
  • Spotify officially launches in 2008, rising to challenge iTunes’ monopoly on the digital music market with a “freemium” model that provided baseline features for free, and sleeker, better, ad-free services for a subscription cost. For both listeners and artists, Spotify will change the face of music and digital consumption forever.
  • Google began developing its first self-driving car in 2009. Two years later, Nevada would make it legal for Google’s driverless AI to take to public roads, and the car even passed a state road test in 2012. Now, there are thousands of cars on the road with self-driving technology. Last March, a self-driving Uber in Arizona struck and killed a pedestrian, the first death of its kind attributed to a self-driving car.

2011-2015

The developments of the previous five years kick into high gear as the information economy explodes, and digitized consumer data rises to become one of the most valuable and sought-after commodities. iPhones started coming equipped with a sometimes-informative, often-aloof AI sidekick named Siri, your Xbox 360 started tracking your body movements via Kinect, and IBM’s Watson AI defeated two human champions on the TV show Jeopardy! We gotta step our game up if we want any more of that sweet, sweet Jeopardy! money, y’all.

  • In 2012, researchers at Stanford and Google presented findings showing that their AI system had become remarkably good at identifying pictures of… cats. Naturally, this was hailed as an important step in building an artificial brain. We should have known cats would play a critical role in the development of human-like robotics.
  • Social media platforms like Twitter became some of the most powerful tools in grassroots activism when activists used them during the Arab Spring uprisings of 2011 to disseminate images of the chaos during a media blackout. A few years later, Twitter would be used to similar effect in documenting state brutality against Black Americans, with on-the-ground images and reporting going viral. In this way, tech provided direct lines between producers and consumers, without a conglomerate media apparatus controlling the narrative.
  • In 2015, over 3,000 researchers and engineers—including Elon Musk and Stephen Hawking—drafted and signed an open letter calling for a ban on AI-controlled autonomous weapons, which they posited would be the third revolution in warfare after gunpowder and nuclear arms. We have a feeling Alita would have signed this letter, too.

2016-2019

Alright, now we’re in the endgame. In the three years leading up to the release of Alita: Battle Angel, AI and tech continue to morph in ways that seemed impossible just 10 years ago. Just like Alita, we’re wrestling with the implications of this acceleration—and trying to figure out if it’s good or bad for us, after all.

  • Facial recognition software is on the rise, with mixed results. Music festivals are toying with the technology, purportedly as a way to locate criminals on festival grounds. But it’s a slippery slope: a 2016 report revealed that police in Maryland had used the technology to identify and arrest protesters during the unrest that followed the 2015 police killing of Freddie Gray. Opponents of the software argue it will be used to unfairly surveil and control citizens—especially in marginalized communities.
  • A language-processing AI outscored an elite group of humans on a reading comprehension test. Take THAT, dummies.
  • Self-driving cars are getting better and more popular—and they might soon be equipped with software to determine who will die in a fatal crash scenario. Researchers have already run studies posing dilemmas about how vehicle AI should handle these collisions: Should the car continue straight and hit a pregnant woman, or swerve to avoid her and hit three people on the corner? These bizarre and hellish considerations come with this new territory (a deliberately crude sketch of the trade-off follows this list).
  • Last year, almost 40 million Americans owned a smart home speaker, like Amazon’s Echo and its friendly assistant, Alexa. But critics are worried that the systems are surveilling their owners at all times, prompted by stories like the 2018 case in which a user in Germany requested his Alexa data and accidentally received 1,700 audio files of a stranger talking to their device. Alexa can tell us the weather while we’re cooking dinner, but is it listening—and maybe even identifying and sorting voices—when we don’t want it to?
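To make that collision dilemma concrete, here’s a deliberately crude, hypothetical Python sketch of the kind of “minimize harm” rule such studies ask people to weigh in on, loosely in the spirit of trolley-problem surveys like MIT’s Moral Machine. The scenario, labels, and numbers are invented, and no real vehicle software is known to decide this way.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    people_harmed: int

# One invented survey scenario: stay the course or swerve.
stay_course = Outcome("continue straight", people_harmed=1)
swerve = Outcome("swerve toward the corner", people_harmed=3)

def naive_minimize_harm(options):
    """A crude rule: pick whichever outcome harms fewer people.
    Real studies ask humans which trade-offs they find acceptable; this is
    an illustration, not how any production vehicle decides."""
    return min(options, key=lambda o: o.people_harmed)

print(naive_minimize_harm([stay_course, swerve]).label)  # continue straight
```

The unsettling part is everything the sketch leaves out: whose harm counts for more, who sets the weights, and who is accountable when the rule picks wrong.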

Since 2000, our world has changed at breakneck pace, each year faster and more dizzying than the last. New paradigm-shifting steps in tech and AI seem to come each week, and while many of them are pretty cool—who doesn’t want a smart fridge?—we need to be aware of the flip side of the convenience coin. It can get pretty ugly, but it doesn’t have to. We’re not cyborgs yet, but if we do eventually wind up with computer chips in our heads, may we also be as kickass as Alita.
