THE REAL THREATS OF AI (PART II)
Glad you are with me, and I hope you went through the first part, where we gave a gentle introduction to AI and looked at some common fears about it and why they might be far-fetched. In this part, we will talk about the real threats of AI and their consequences. Let’s dive in.
CONSOLIDATION OF AI DEVELOPMENT
Your AI is only as good as the data it is trained on. Data collection, refinement, pre-processing and cleaning take up the major portion of an AI project’s timeline. Consumer-facing AI systems need data from a large, diverse pool of the population, which is what makes the resulting models generalizable. So who has access to such a large user base? Certainly, big tech. People’s social and business lives depend on these applications, which are free to join. I will reiterate the famous line: “If you’re not paying for the product, then you are the product.” Platforms make it easy for you to get into their applications, and due to intense competition they slash prices and look for other sources of revenue. Targeted advertising and selling data to third parties turn out to be lucrative. It looks like a win-win: advertisers can micro-target their audience with better deals, and users get services at a lower price, with well-curated ads and offers on top. So big tech hoards a copious amount of data and can build better models. This data is generally proprietary and hence cannot be used by the general public. Also, big tech, unlike start-ups, can burn through cash and throw bigger models and larger amounts of data at a problem. They certainly have the computational power to get better accuracy. No doubt these companies have talented and diligent engineers who are good at hand-engineering; still, research results published by these companies are often difficult to reproduce.
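To make the cleaning point concrete, here is a minimal, purely illustrative sketch of that work, assuming a hypothetical raw dump called users.csv; the file and column names are invented for this example.

```python
# A minimal sketch of the unglamorous cleaning that dominates project
# timelines: deduplicate, handle missing values, bound outliers, and
# check how well each group is represented in the pool.
import pandas as pd

df = pd.read_csv("users.csv")                    # hypothetical raw dump

df = df.drop_duplicates()                        # remove repeated rows
df = df.dropna(subset=["age", "region"])         # drop incomplete records
df["age"] = df["age"].clip(lower=13, upper=100)  # bound obvious outliers

# Representation check: a model trained on a skewed pool will
# generalize poorly to the groups it barely saw.
print(df["region"].value_counts(normalize=True))
```

None of this is glamorous, but skipping any of these steps quietly degrades whatever model is trained downstream.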
Now just think about it: a group of about 1,000 white males in the basement of Facebook headquarters decides what the other 2 billion people of the world get recommended in their feeds. It highlights the representation problem in these companies, as the algorithm incorporates a limited set of values. Only a handful of people call the shots that affect everyone else. Such provincial values can lead to curtailed free speech and too much power in the hands of a few. Also, when automation precipitates job losses, you won’t be invited to the table to negotiate the terms when Universal Basic Income (UBI) is enforced. The financial rationale for UBI is that corporations earning from automation would share their profits with citizens, which in turn would help those corporations sell their products. So they will always recoup what they pay out: producing with automation is cheap, and the products are sold back to people at a profit. Also, don’t expect Apple to share its profits with an Indian worker in an Apple factory in India who lost his job to automation. It is difficult to define “universal” and “basic” in UBI, as both vary from place to place. So we would need global governance with uniform rules for all; and to solve consolidation, we need more democratization of AI techniques so that everyone can access the knowledge and build upon it. Democratization can be achieved by setting up government research labs that prioritize resource and knowledge sharing over profits. This would lead to better representation and community awareness of these topics.
TOO MUCH TRUST IN THE BLACK BOX
For many practitioners, AI algorithms are black boxes. We just see the magical end results and never really understand why the algorithm took certain decisions. In 2015, an image classification system by Google erroneously labelled two African American people as gorillas, raising concerns about racial discrimination. So blind trust in a black box is not sustainable in the long term; we need to unravel neural networks and build a perspicacious understanding of them. A deeper understanding will be exacting, as neural networks loosely resemble our brain’s structure, building up connections on the scale of billions. Max Tegmark, an MIT professor, talked on the Lex Fridman podcast about how AI could help in physics if we can understand what’s happening under the hood. Suppose you have data on force (F), mass (m) and acceleration (a). With enough data, the AI can come up with the relation F = ma, as the sketch below illustrates. Understanding such mathematical equations would help researchers make sense of the physical world without the previously arduous, heuristic process of deriving these complicated relations by hand.
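Here is a toy sketch of that idea in the spirit of symbolic regression, not Tegmark’s actual method: propose a few candidate terms, fit a sparse linear model on synthetic measurements, and see which term the data keeps.

```python
# Toy "equation discovery": recover F = m*a from noisy synthetic data
# by fitting a sparse linear model over candidate terms.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
m = rng.uniform(1, 10, 2000)            # mass samples
a = rng.uniform(1, 10, 2000)            # acceleration samples
F = m * a + rng.normal(0, 0.1, 2000)    # noisy force measurements

# Candidate terms the model may combine: m, a, and the product m*a.
X = np.column_stack([m, a, m * a])
model = Lasso(alpha=0.1).fit(X, F)

print(dict(zip(["m", "a", "m*a"], model.coef_.round(3))))
# The m*a coefficient lands near 1 while the others shrink to ~0:
# the model has effectively rediscovered F = m*a from the data alone.
```

The sparsity penalty is what turns curve-fitting into something interpretable: of all the terms that could explain the data, only the one that actually matters survives.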
So we need to make sense of the algorithm before it goes haywire. Ignoring this is like not installing a fire alarm and sprinkler system in your house because the chances of a fire are thin and you have immense trust that it will never happen. In full self-driving technology, could an AI understand the trolley problem? What are the edge cases? As we move towards the singularity, if we want a salubrious symbiosis with AI, we need to understand what’s happening under the hood so that we can inculcate our human ethical values into the design of AI systems.
PROPAGANDA MACHINE
Our ancestors lived together in tribes and communities, and it behoved all members to take care of each other. So humans built an evolutionary pathway of forming groups and supporting the people in them; activities like agriculture and hunting were also collective efforts. People started looking inwards, and this slowly grew into an Us-and-Them mentality. In the 1900s, propaganda spread through mass demonstrations, charismatic leaders and social movements. Not just anyone could start a movement and have an impact; groups grew slowly over time. The medium of communication was often impractical: you had to gather people and convey the group’s agenda in person, or through the few available TV and radio channels, which were pricey. Fact-checking was not possible due to limited information availability. But with the advent of social media, did something change? Now anyone can start a movement, like QAnon, a discredited far-right conspiracy theory that portrayed Donald Trump as a messiah, started by an anonymous person named Q on the 4chan platform. You can be part of a group by simply clicking the “Follow” button. The medium of communication travels at the speed of light: with such a vast network of optical fibres connecting the globe, information reaches the other side of the world in no time. Internet connections are faster and cheaper than they have ever been. And fact-checking remains exacting, now because of the copious amount of information available at our fingertips.
The backbone of a social media platform is the recommender system that curates your feed as per your interests and the things you care about. We discussed the ad-revenue business model of social media, which makes these products accessible. The recommender system’s main objective is to keep users on the platform as long as possible, which increases the chances of click-throughs on advertisement links. The recommender system (which can also be referred to as an AI) learnt that people are curious about topics trending globally and locally, and about what their peers and the people they admire are talking about. Studies show that fake news spreads faster by exploiting the power-law dynamics of social media, as it grabs more eyeballs. We tend to share things that satisfy our confirmation bias: we look for what we want to hear, not what we need to hear. So the algorithm implicitly figured out that fake news keeps us on the platform longer, and it amplified the voice of fake articles and media; the toy simulation below shows how this falls out of the objective itself. That’s it: the ad-revenue business model, which had benign intentions and gave many small businesses exposure to the wider online market, snowballed into an always-getting-better propaganda machine. The sheer power of social media fuelled Brexit, the US Capitol riots, and more to come. We are more divided now; the Us-and-Them mentality has transformed into an Us-only-and-no-Them mentality, and we are forming echo chambers within echo chambers, drifting further and further from reality.
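Here is a minimal epsilon-greedy bandit sketch of that dynamic, with invented item names and engagement payoffs: the recommender never evaluates truth, only how long each item holds attention.

```python
# Toy epsilon-greedy recommender: serve whichever item keeps users
# engaged longest. Item names and payoffs are invented for illustration.
import random

ITEMS = {"balanced_news": 1.0, "sensational_claim": 1.6}  # avg minutes per view

counts = {item: 0 for item in ITEMS}
values = {item: 0.0 for item in ITEMS}  # running mean engagement per item

def recommend(epsilon: float = 0.1) -> str:
    """Mostly exploit the item with the highest observed engagement."""
    if random.random() < epsilon:
        return random.choice(list(ITEMS))  # occasional exploration
    return max(values, key=values.get)

for _ in range(10_000):
    item = recommend()
    reward = random.gauss(ITEMS[item], 0.5)  # simulated session length
    counts[item] += 1
    values[item] += (reward - values[item]) / counts[item]  # update mean

print(counts)
# The bandit never asks what is true; it simply learns that the
# sensational item holds attention longer and serves it far more often.
```

Nothing in the objective penalizes falsehood, so amplifying it is not a bug in the optimization; it is the optimum.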
We talk about great barriers to human civilization: an apocalypse that wipes it out entirely, like the meteorite did for the dinosaurs. One of the man-made barriers is a nuclear holocaust. As we grow more divided, that reality is not far away. So yes, you won’t be around to lose your job to AI.
CONCLUSION
Stanislav Petrov, a lieutenant colonel in the Soviet Air Defence Forces, played a crucial role in forestalling a nuclear holocaust during the Cold War. The Soviet early-warning satellite system malfunctioned and signalled that five nuclear missiles were heading their way. Nuclear war was three YESes away, but Petrov, as duty officer, said NO, purely from his intuition. One man saved the world. Humans have an astute intuition: we can foresee the future and take action to prevent dire situations. But development in social media and AI has been fast-paced and is growing exponentially, and we could not assess the repercussions of our creations in such a fast-moving environment. Slowly, though, we are: governments are regulating social media companies to curtail fake news, technology companies are open-sourcing their results so that the general public can implement them, and practitioners are unravelling the black box, handling the edge cases and instilling our core values in these systems. Remember, AI is just a tool, and it depends on how humans add value to this tool.