As technology continues to impact all of our lives in increasingly new ways, artificial intelligence (AI) has quickly become a household word. Transportation is also a sector that impacts everyone, and it’s no wonder that AI and urban mobility have a lot to do with one another.
The rigorous data capture and analysis involved in new transportation technologies can help optimise routes, reduce travel and waiting times, predict weather conditions and more. Yet mobility, by definition, involves stepping out in public to get from point A to point B, making us vulnerable to potential misuse of the data collected on us in public.
Let’s walk through these tradeoffs with Alok Jain, a transport expert with 30 years of experience across Asia and the instructor of Artificial Intelligence in Urban Mobility, one of the many free, self-paced online courses offered through the EIT Urban Mobility Academy.
What does AI have to do with urban mobility anyway?
“AI is fundamentally advanced data analytics, and transport is a very data-rich environment,” says Jain. Earlier in his career, he worked with Hong Kong’s dense bus network, including a fleet of 4,000 vehicles, 4,000 bus stops and 12,000 drivers. With those variables alone, the number of combinations and calculations amounted to nearly 200 million transactions per day. “Now you cannot put that into any spreadsheet because it would break every 10 seconds.”
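Jain’s point about scale is easy to check with a back-of-envelope calculation. The sketch below uses the fleet figures he quotes; the way the variables combine is a simplified illustration, not his actual scheduling model.

```python
# Back-of-envelope illustration of why fleet data outgrows spreadsheets.
# Figures from Jain's Hong Kong example; the combinations are simplified.
buses, stops, drivers = 4_000, 4_000, 12_000

# Even pairing every bus with every stop gives 16 million combinations,
# before adding drivers, timetables or passenger counts into the mix.
bus_stop_pairs = buses * stops
print(f"{bus_stop_pairs:,} bus-stop pairs")  # 16,000,000
```

Spreadsheets cap out at around a million rows, which is why this kind of combinatorial volume calls for algorithmic analysis instead.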
That’s where artificial intelligence and algorithms come in and use complicated computations to detect patterns that previous planners hadn’t been able to discover themselves. As Jain says, “If you don’t have the data, it’s all guesswork. But now you can tell how women or old people like to travel to and from where, how many people make five or ten trips a day, how many unique passengers you have in a month… It’s like entering into a Disneyland of buses.” Yet it’s also important to understand the different types of data, their origins and value to planners, all of which (and more) are covered in one of our other free courses: The power of Mobility Data: Discover how every move matters.
With such a gargantuan amount of mobility information at hand, AI then helps planners identify new strategies to improve important factors like customer experience and maintenance, as well as fine-tune for efficiency, which often leads to attractive profit margins, too. “When I joined the bus operation in 2012,” says Jain, “we were losing $120M HKD (€14M) per annum. In 2016, after doing all of these reforms in the bus industry, using exactly the same number of buses, drivers, resources and facilities, we were making $750M HKD (€88M) profit. That’s the size of the prize. And then you realise you are sitting on a gold mine.” Unlike large tech companies, which tend to keep data and profits for their shareholders, when this pot of gold is invested back into public transport, the entire public benefits, not just a handful of investors.
Sometimes AI reinforces what’s desperately missing from the data
When data is front and centre, it’s all the more important that datasets aren’t biased, or the algorithms trained on them will reinforce those biases, as researchers like Dr. Joy Buolamwini pointed out in the documentary Coded Bias and her 2023 book Unmasking AI. For example, Amazon scrapped its AI recruiting tool after it showed a bias against women, who were not sufficiently represented in the training data. Similarly, one example that frequently comes up in mobility discussions is how women tend to ride and operate public transportation less than men because, as Jain says, “the entire industry has been designed for males.”
To close the gender divide in their Hong Kong bus dataset, Jain and his team took the initiative to recruit women drivers, and the data later revealed they were a key source of potential. “Women drivers had fewer customer complaints, they were more punctual, they took less leave of absence.” Yet focus groups also revealed that it was difficult for women to work the same schedules as men, which is why Jain and his team designed shifts to be compatible with family obligations. “With the whole world now complaining about the shortage of drivers, this is really where the value proposition is.”
Not all data can be treated the same
Unlike data on variables like delay times and vehicle occupancy, data on your identity triggers privacy concerns. AI that involves personal data, ranging from our birthdays to biometrics like our own faces, is especially of concern because, as Stephanie Hare says in her book Technology is Not Neutral: A Short Guide to Technology Ethics, “Our biometrics cannot be reset, unlike a username or a password. That is a problem because they can be stolen and gathered without our consent or knowledge.”
Whereas some people tend to divide data into public and private, Jain draws a different distinction for AI: personalised and non-personalised data. The first “should be handled with extreme care,” says Jain, as it includes biometric data, home location and more. “There should be processes and audits: Who can access it? Who has the authorization? Who has logs? Are you willing to share that data with the police? You have to draw very clear boundaries about how that data could and would be used.”
The second category is non-personalised data, or what Jain likes to call “data of public interest” because it can be used for civic planning and the design of facilities and mobility services. “Travel patterns, bus movements, where the buses are at what point of time, what kind of loading they have.” This data must also be anonymised and made openly accessible, which guards against politicisation and abuse. “The moment you democratise this data, people can see what is happening. A person can say ‘hey, the bus route should be put here because that’s where people are travelling.’”
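Jain’s two categories can be made concrete in code. The sketch below, with hypothetical field names and records, shows the basic move from personalised trip logs to anonymous “data of public interest”: identifiers are dropped, and only aggregate loading per stop and hour survives.

```python
from collections import Counter

# Hypothetical raw records: the passenger_id makes these personalised data.
raw_trips = [
    {"passenger_id": "a91", "board_stop": "Central", "hour": 8},
    {"passenger_id": "b02", "board_stop": "Central", "hour": 8},
    {"passenger_id": "a91", "board_stop": "Mong Kok", "hour": 18},
]

def to_public_interest(trips):
    """Drop identifiers; keep only boarding counts per (stop, hour)."""
    return Counter((t["board_stop"], t["hour"]) for t in trips)

loading = to_public_interest(raw_trips)
print(loading[("Central", 8)])  # 2 boardings, with no trace of who travelled
```

The aggregate on the right-hand side is exactly the kind of dataset that can be published openly for planners and the public alike.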
The fine line between “surprise and delight” and surveillance
With so much data in the world around us, the intent behind how data is collected and used is another important factor to consider for AI, and Jain offers a couple of examples for comparison. In the case of vehicle occupancy, Jain highlights the importance of choosing measurements that fall into the non-personalised, anonymous category rather than personal data. “You just know how full the train is or how full the bus is. So if you are using non-personalised technologies, like infrared (IR)-based sensors, you are just getting a thermal footprint of a person,” as opposed to cameras that capture faces.
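As a sketch of why the sensor choice matters: an IR door sensor yields nothing but boarding and alighting counts per stop, from which a load factor can be derived without ever identifying a passenger. The function and figures below are illustrative assumptions, not a real sensor API.

```python
# Occupancy from anonymous (in, out) door-sensor counts, one tuple per stop.
def load_factor(stop_events, capacity=120):
    """Return the share of seats/standing room in use after the given stops."""
    on_board = 0
    for boarded, alighted in stop_events:
        on_board += boarded - alighted
    return on_board / capacity

print(load_factor([(30, 0), (25, 10), (5, 20)]))  # 0.25 — the bus is 25% full
```

The same occupancy figure from a camera feed would require storing images of riders, which is precisely the trade-off Jain warns against.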
Things can get trickier with AI tools that measure factors like behaviour. Our AI and urban mobility course features an example of facial recognition used to detect signs of driver fatigue or disturbances that could lead to accidents. Jain explains that one of the primary intents of this initiative was driver safety. “In my conversations with the unions, I said ‘we are not trying to intrude. We are trying to improve working conditions.’ We didn’t want them to make a mistake and jeopardise their own safety.” Since the implementation of these technologies was mandatory, Jain’s team made several guarantees to drivers, namely that they wouldn’t lose their jobs and that all profits from the AI implementation would be passed along to them as bonuses. The initiative led to a jump from 70% to 90% in departure and arrival reliability, and “not a single driver was ever penalised for not doing what they were supposed to do.”
Jain points out that some data sharing should indeed be voluntary, and again, intent plays a key role in drawing boundaries over data use. Take providing one’s birthday, or sharing one’s location for perks like free bus rides or discounts on coffee and snacks during delays, as Axon Vibe does in Switzerland: this data should not be saved, so it cannot later be used to manipulate or spam travellers. “We are not recording this data; we just detect it for 300 milliseconds.” This idea also extends to giving travellers control over what happens with their data afterward. “If you don’t want a company to use a certain record, you can just go delete the record, so there’s full freedom for the customers.”
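The “detect, don’t record” principle can be sketched as follows: a signal is checked in memory and immediately discarded. All names here are hypothetical; this is a minimal illustration of the idea, not Axon Vibe’s actual implementation.

```python
def perk_applies(location_signal, delayed_stops):
    """Check eligibility in memory; the signal is never persisted anywhere."""
    eligible = location_signal["stop"] in delayed_stops
    del location_signal  # drop the local reference once the check is done
    return eligible

# Hypothetical usage: a traveller near a delayed stop qualifies for a perk.
print(perk_applies({"stop": "Admiralty"}, {"Admiralty", "Wan Chai"}))  # True
```

Because only the yes/no outcome leaves the function, there is no stored location record that could later be repurposed for marketing or surveillance.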
Trust and accountability to reinforce the public good
It’s not always easy to know upfront if traveller data will be misused by AI, which is why Jain says the proof must be in consistent delivery of trustworthy service on the part of the transport provider. “Trust is only developed over time; it’s not a transaction. Trust is an experience, and that is the key here.”
How to establish that trust in the first place? Jain walks us through the 4E model for an ethical implementation of AI tools in transportation:
- Engineer: Develop unbiased and effective ways of measuring and analysing data.
- Educate: Be transparent about data usage and deployment, and hold public forums for discussion.
- Enforce: Strictly penalise those who nefariously breach established boundaries.
- Evaluate: Monitor whether the policies are working and adjust them accordingly.
Ultimately, this is where independent advisors and the greater civil society must come to the table. Many cities have established councils, such as the Public Transport Council (PTC) in Singapore and the Transport Advisory Council (TAC) in Hong Kong, made up of members from academia, law, politics and beyond to get into the weeds of regulations and hold city planners and tech partners accountable for improving service for the public good. “Where a common person doesn’t understand the fine print, this group of people goes through the legalese and terms and conditions with a fine-toothed comb, and then they give a green signal whether that can be used or not used.”
As Jain says, “transportation impacts everyone,” and only with measures like these in place can we rest and travel easily knowing that data about our movements in public and artificial intelligence are being used for the public good.
Dive deeper into the insights shared by Alok Jain and other experts featured in our blog post. Gain comprehensive knowledge, practical skills, and ethical perspectives on AI’s impact on transportation in our exclusive portfolio of urban mobility courses.