A new tactic appears to have arisen in the use of generative AI in social media: rapidly generating seemingly knowledgeable responses to other people. One of the many problems with this approach is that generative AI is so lightweight in terms of understanding (it has none) that it mistakes words for other words and misinterprets meaningful differences at every level, such as the difference between impossible and undecidable, or between intrusion and anomaly.
Talking to idiots
Generative AI may be used to trick people who actually know what they’re talking about into engaging in discourse to correct the foolishness, or it may be used to trick people who don’t know the difference into believing that someone has expertise they don’t have.
Which brings us to…
The Turing test is about a human being differentiating between a computer and another person, and at this point telling the difference between generative AI BS and human ignorance is becoming more difficult. Arguably, computers have now reached the point where, for a couple of short interactions, they are indistinguishable from humans, but only because they copy and regurgitate what humans have already said. Think of a digital tape recorder with a bad synonym dictionary, flipping between different recordings as it goes, edited by a grammar teacher and a spell checker.
We have found another way to waste our time
It is reasonably predictable that as humans generate more technology they will become weaker and less able to use their bodies and minds. Much science fiction has depicted evolved species falling by the wayside as they have less and less to do and end up in ridiculous pursuits that get them nowhere. Some depict it as hell, while others merely call it a twilight zone.
But the bigger problem in my view is that the introduction of generative AI into social media creates a vast waste of time and further disruption of the already failing human discourse.
It takes about two interactions to tell that somebody is not using actual reason and logic to understand things, and is not reading and understanding with the depth a human being would normally apply. Thus, within a few interactions, you can conclude that this is just somebody using AI as a cudgel against others.
Worse than search
Unlike a search engine, where people look for sources of information, read those sources, evaluate them along with the content they present, and then may reference them and make statements about them, the generative AI version fails the user on both sides of the equation:
- The person using it as a cudgel gains no knowledge
- The person it is being used against wastes their time.
Frankly, I think it should be banned from use in social media, specifically because social media was created for people to be able to interact with other people through computers.
Prior abuses of people through social media
This is not the first time social media has been used to exploit people. For example:
- Its use for psychological experimentation has always been problematic.
- Psychologists somehow determined that it is acceptable to push information into this medium and observe responses in order to gain understanding of how individual and group behavior works. The fundamental problem with this approach is not that it generates poor results. Rather, it has two underlying problems: (1) it interferes with the population it is experimenting on, and therefore suffers from the problem of almost any scientific measurement, altering the very thing being measured; and (2) psychological experimentation is supposed to operate under informed consent, in which subjects are told they are being experimented on before the experiment and give their consent, and afterward are told what the experiment was, how it worked, and what the outcomes were.
- This rule has apparently been reinterpreted as if explaining the experiment after the fact, through formal publication in scientific journals, conferences, and elsewhere, were adequate. But informed consent has always been personal to the individuals being experimented upon, and it is routinely violated in computing media.
- But let’s get to the reality here. These are requirements for scientific experiments, and there is no such legal requirement nor any punishment associated with the failure to undertake these sorts of controls in the private arena. Thus we get unlimited abuse and use of human interactions to advantage the social media companies.
- However, in all fairness, they are providing a “free” service, where the “free” is actually a fee paid in the form of whatever information they can extract from your brain and whatever money they can extract from your pocket, or others’, through that elicitation.
- Its abuse for political influence operations has always been disingenuous.
- Politics is often disingenuous of course, but the introduction of political influence operations into and through social media by nation states against other nation states to disrupt and destroy the body politic has risen to new levels with the availability of social media giving direct access to most of the population of the world.
- At different time scales, propaganda methods have been used for everything from the near-real-time starting and supporting of flash-mob riots to the long-term creation of echo chambers, in which members of groups are separated from each other and internal feedback is used to induce hatred and divide populations in order to conquer them.
- Of course this view is based on the idea that freedom and peace are the things we should all seek in the world, but the reality is a significant portion of the world population does not buy into that view, and the desire for violence and abuse that passes from generation to generation continues to be an alternative success strategy for much of the human race. This goes back to the biblical depiction of two brothers, one violent and impetuous, the other peaceful and thoughtful.
- Its abuse by the companies that create and own it.
- The companies creating social media platforms have always used them to extract information from and elicit behavior in their users for the commercial and personal gain of the owners. While this may go against the very concept of social media, it is simply the result of marketing social media as a new and better way for people to interact.
- Social media in the form of large-group communication through messages written to a common space on a server has existed at least since the 1970s, with the emergence of popular news and bulletin board systems. Even before that there were email lists, starting around 1971, soon after the first ARPANET nodes were in place, and before that there were limited similar mechanisms within individual computers in defense information systems networks (e.g., DISN). Light surveillance of such systems existed in those early days, but any records were normally only sought or found through searches performed after an event. It was generally considered inappropriate to surveil such things, even though SYSOPs certainly looked at messages in forums from the beginning.
- It was really with the emergence of Facebook that rapid, unannounced rule changes were put in place to bypass and undo privacy settings, and that larger-scale commercial abuse of social media came into widespread application. Advertising was more obvious then, and the sale of data was not widely advertised even if it was being done. But push polls and the like had started.
The emergence of psychology in social media
In the 1970s, SRI International began significant research into why people want things beyond their needs, for the purposes of advertising. This ultimately resulted in studies leading to things like the Values and Lifestyles (VALS) analysis, since used to identify characteristics of buyers to help in targeting marketing and sales processes.
Advances in the use of computers for marketing and advertising started in that time frame and have driven commercial expansion ever since. The financial benefits of various things that are bad for society and individuals have continued to grow through the use of computers since that time. And with the emergence of social media came a whole new capacity to individualize messaging: the delivery of a specific message to each individual at the right moment to drive their decision-making process.
Real-time marketplaces for advertisement over the Internet have, for more than a decade, driven sub-millisecond purchases of ad placements on user screens, with real-time pricing set by competitive bidding. This has been largely driven by statistical models. Understanding why people buy has been less important than knowing that they are ready to buy, and when.
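To make the mechanism concrete, here is a minimal sketch of how such an exchange might resolve a single impression. The bid structure, numbers, and second-price rule are illustrative assumptions for this sketch, not a description of any particular exchange.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    predicted_click_rate: float  # a statistical model's estimate for this user, right now
    value_per_click: float       # what a click is worth to the advertiser

    @property
    def amount(self) -> float:
        # Expected value of showing this ad to this specific user at this moment.
        return self.predicted_click_rate * self.value_per_click

def run_auction(bids: list[Bid]) -> tuple[str, float]:
    """Pick the winning ad for one screen impression.

    Uses a simple second-price rule: the highest expected value wins,
    but pays the runner-up's price. Real exchanges vary; this is a toy.
    Assumes at least two bids.
    """
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner.advertiser, runner_up.amount

# One impression, resolved in well under a millisecond of compute:
bids = [
    Bid("advertiser_a", predicted_click_rate=0.02, value_per_click=1.50),
    Bid("advertiser_b", predicted_click_rate=0.05, value_per_click=0.40),
]
print(run_auction(bids))  # winner pays the runner-up's expected value
```

The point is not the auction itself but that every input to it is a per-person, per-moment prediction.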
But in social media, popularity became a surrogate for expertise, following from the popularity of media stars in the film and television industries over the last century, itself following from the popularity of stage actors before that. Similarly, books were followed by recordings, then movies, then videos, and so on, in driving the popularity of ideas, brands, and images into the minds of larger populations.
The emerging ability to push influence
I draw a significant distinction between pushing advertisements and pushing influence, and the latter has emerged in a fully automated sense only very recently. We are only at the beginning of this new era, but there are analogies, and they can reasonably be used to anticipate what comes next.
Today’s generative AI is merely a toy version of the likely application of psychology to influencing (you might choose the word controlling instead) populations by influencing individuals on a case-by-case basis.
Starting with the early work of Karrass in negotiation and the many psychologists who have studied and cataloged human cognitive errors, a system for automating influence analysis to drive group decisions through strategic communications was developed in the mid-2000s: “A method and/or system that can be implemented on a computing device … uses a rule set to evaluate data about a situation and actors in order to provide advice regarding strategies for influencing actors and/or other outputs.” It offered a human interface for influence operations based on the results of psychological and sociological studies. This is just the beginning.
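As a toy illustration of what “a rule set to evaluate data about a situation and actors” might look like in code, consider the sketch below. The actor attributes and the rules are invented for illustration only and do not reflect the actual system quoted above.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    risk_averse: bool
    values_consensus: bool
    trusted_sources: list[str]

# Each rule is a (condition, advice) pair, standing in for a catalog of
# negotiation findings and cognitive biases (invented here for illustration).
RULES = [
    (lambda a: a.risk_averse,
     "Frame the proposal in terms of losses avoided rather than gains."),
    (lambda a: a.values_consensus,
     "Show that peers in the group have already agreed."),
    (lambda a: len(a.trusted_sources) > 0,
     "Deliver the message through one of the actor's trusted sources."),
]

def advise(actor: Actor) -> list[str]:
    """Return the influence strategies whose conditions match this actor."""
    return [advice for condition, advice in RULES if condition(actor)]

print(advise(Actor("A", risk_averse=True, values_consensus=False,
                   trusted_sources=["colleague_b"])))
```

Even a sketch this crude shows how strategy advice can be mechanized once the rules and the data about the actors are in hand.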
Studies of decision-making have long shown that human decisions are made based on sentiment and justified based on logic. Sentiment analysis, techniques based on linguistic inquiry and word count (LIWC), so-called “emotional marketing value”, and related ways of tying the use of words in context to sentiment are now moving from analysis to design, as has happened in so many other areas over time, and will soon yield automation in the generation of sentiment. This will start as human-assistance software but emerge over time as fully automated as big data is applied and measurement improves. This is just the beginning.
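A minimal sketch of the word-count style of analysis, in the spirit of LIWC though with invented placeholder category lists, shows how mechanically simple the measurement side is:

```python
import re

# Tiny, invented category word lists; real dictionaries contain thousands of entries.
CATEGORIES = {
    "positive_emotion": {"love", "great", "hope", "win"},
    "negative_emotion": {"hate", "fear", "lose", "threat"},
}

def category_rates(text: str) -> dict[str, float]:
    """Return each category's share of total words, word-count style."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return {
        name: sum(w in vocab for w in words) / total
        for name, vocab in CATEGORIES.items()
    }

print(category_rates("They want you to fear losing, but we still have hope."))
```

Turning such measurement around, so that text is generated to hit a target emotional profile rather than merely scored against one, is the move from analysis to design described above.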
The scalable automation of individual influence
The barrier to direct, scalable automation of individualized influence has long been the difficulty of putting forth individually targeted, credible, convincing, specific messaging in sequence with feedback. Looking at the history of the field, the whispered voice in the ear of the dictator has been moving over time toward direct control of the mental processes of every person. While we do not yet have the ability to directly control the neural pathways of the human brain, science and technology are on the verge of being able to control most of the visual and audio observables of most of the individual people of the world for most of their waking hours. By increasingly enveloping the major drivers of cognition, the potential to influence each person in an orchestrated play of the author’s choosing is coming into focus.
We are not there yet. But more convincing generative AI, combined with a more comprehensive ability to observe and control what most people do and see, and the analytical power to apply external feedback to drive the decision-making process, has the realistic potential to allow those in control of the technology to take control of everything else in human society.
What do you believe in?
If you believe in freedom of speech, the concept of the marketplace of ideas emerges.
If you believe in freedom of peaceful action, the concept of self-enrichment emerges.
If you have these two things and the technology of scalable influence, then the people with enough wealth and the desire to use it this way will eventually win out in the marketplace of ideas and control the society.
This brings us to the concept of …
Monopoly, antitrust, oligarchy, anarchy, and autocracy
At some point, governance falls over. That is the history of the world under human control.
The very freedoms most people seek lead to some people being more materially successful than others. That leads to the corruption of power, which leads to the grab for power, which leads to oligarchy; and with the creation of disillusionment comes the emergent autocrat, claiming to solve all our problems by forceful action and then taking that forceful action against us. The consolidation of wealth, and therefore power, moves into the realm of monopoly, whether financial or cognitive, through the monopolization of the emergent mind of the people by influence. Antitrust to break up the monopolies may stave this off, but as influence operations succeed, this approach breaks down, and tradition decays with time and in the face of corruption. Anarchy may take over from autocracy, if it can emerge, but it can also lead to autocracy as the resolution to the revolution. Such are the historical patterns of human group behavior in societies writ large.
Large-scale intentional influence operations are rarely used to fuse a society into a body politic for the well-being of the population. And today’s influence operations in social media don’t help to build up a civil society. They act to break it down, and they do so by the bad will of the people who abuse it, against the good will of the people who use it as a marketplace of ideas. Scalable automated influence operations put a thumb on the scale of the marketplace of ideas, slanting the balance of equities toward predetermined outcomes.
A new form of governance
The age of the wise computer overlord is a long way away, and the concept of the benevolent dictator founders on the fact that power corrupts. What we need is a new form of governance. Democracy only thrives with a well-informed public, yet influence operations in social media are used today to push in the opposite direction. Government control over information leads to the state propaganda machine, while private control leads to private propaganda. There has to be some balance of power to mitigate the weighting of the scales, and that balance historically works better with more centers of power forcibly disconnected from each other.
The nature of people fights against governance
One of the things that seems universal in the nature of people is that any system that is put in place and has rules will be gamed. Over time, people game systems with increasing success. To counter the gaming, regulations are typically put in place, but in this environment the regulations grow without bound over time. The regulations themselves, and the process by which they are created, introduce corruption into the system as people corrupt the decision makers who devise and put the governance in place.
People will take advantage
It seems the only reasonable solution to this is to keep changing the system. The problem with changing the system at too high a pace is that it creates an unstable environment, which creates a lot of inefficiencies and prevents people from relying on the world they operate in. So it seems that as long as people wish to take advantage, people will learn to take advantage, and that’s all there is to it.
Amplification
Fraudsters have taken up the abuse of AI on social media by improving their use of such methods to generate better content, as have legitimate advertisers. But the real threat in this arena comes from those who use analytics to focus individual and small-group messages from multiple embedded perspectives, so as to move groups of people individually, at scale, for millions of groups simultaneously. This is where the real breakthrough in influence is being sought and is taking place.
Back to the challenge of social media
As a system of governance, we notionally think that taking into account the will of the people is fundamental to their happiness and success. This has certainly panned out over time, at least for the last several hundred years. Various experiments with democracy have worked to differing degrees, but fundamental to all of these approaches is that the information in the hands and heads of the people is reflected in the governance they create and in their willingness to be governed by it.
It seems that there is no end to the potential for influence operations, and that as we build culture, that culture will influence the way everything happens. But what we have today is a dramatic clash of cultures caused by the increased rate of integration of information within and between those cultures. People have not yet gotten used to the idea that they can live in any cultural arrangement they wish, in peace and comfort, if that is the culture they choose to live in. That is in no small part because those who believe in a culture of heavy competition, of physical power over emotional and rational power, and of subverting the will of others for their own advantage tend to be at an advantage, because almost any set of rules and expectations can be defeated by those who choose to violate them unless they can almost immediately be caught and punished.
It seems the feedback systems do not allow for a stable system of governance in which everybody can have everything they want. Nor is this likely a limitation of resources: even in situations where resources are plentiful, people still seek to take advantage and to have more resources than others.
Conclusions
The age of social media has accelerated the age of influence well beyond that of the broadcast era. And the age of automation, applying psychology to ever more automated and customized influence over individuals and groups, has led to the present and rapidly evolving situation of control over the body politic. Where will it end? Influence operations have always been the basis of all human society and interaction, and they likely always will be. We are pack animals, and our long-term survival depends on it. The perhaps more important question is who will win the influence wars. Unless the wealthy good people of the world engage in a serious way, I think it will be the wealthy bad people.