Computing Prejudices in Modern-Day Society
- Huroulain Saffiya Sheikh
- Mar 18
- 4 min read
In a world where artificial intelligence has become ubiquitous, its use increasingly visible in day-to-day tasks, legal questions have arisen about how the technology might be exploited. Specifically, the discourse centers on the kind of information that generative technologies are fed and the biases that may surface in their responses.
Uses of AI in the Workplace
According to the Center for Labor and a Just Economy at Harvard Law School, artificial intelligence has been a complete game changer for the work experience, particularly for workers' autonomy, health, and safety. Concerns have circulated about the collection and surveillance of workers' data and about organizational strategies.
Specifically, the center's analysis dives into three major factors: the objectives of state intervention, preemption risks, and laws currently proposed for state and local action.
State intervention has taken the form of amendments to existing policy, exemplified by California's 2023 executive order, which directed state agencies to examine the risks that generative artificial intelligence poses to communities, particularly its effects on goods and services and access to them, as well as potential threats to energy infrastructure.
Meanwhile, Texas instituted a law in 2023 requiring the Texas AI Advisory Council to review state agencies' inventory reports of automated systems and directing agencies to flag any systems in which artificial intelligence is involved. This does not extend to common components such as spell checkers or spam filters.
Diving deeper into concerns about discrimination and bias, states such as Connecticut and New York have worked to assess systems that use artificial intelligence and to evaluate the potential for discriminatory outcomes before a system is deployed in the field, in some cases mirroring Canada's framework for the responsible use of artificial intelligence and bias testing. Overall, state governments across the country are working to build in human review before artificial intelligence is put into the field.
Biases in Artificial Intelligence
Artificial intelligence is naturally prone to a series of biases. Data carries an array of vulnerabilities that allow distortions to seep through. Notably, five main categories of bias are prevalent: selection, confirmation, measurement, stereotyping, and out-group homogeneity bias.
Selection bias occurs when the data presented to an artificial intelligence system does not accurately reflect the reality it is meant to model. This can result from incomplete data, samples collected in a biased manner, or a dataset that underrepresents a particular demographic.
To combat this bias, the datasets used often have to be enriched with a broader range of perspectives so that outputs adequately reflect underrepresented groups.
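To make this concrete, here is a minimal Python sketch (the groups, population shares, and scores are all invented for illustration): an average computed from a sample that underrepresents one group drifts away from reality, and reweighting records toward the true population shares pulls it back.

```python
import random

random.seed(0)

# Hypothetical population: groups A and B are each 50% of reality,
# but the collected sample contains only 10% group A (selection bias).
def outcome(group: str) -> float:
    # In this toy example, group A tends to score lower.
    return random.gauss(60 if group == "A" else 80, 5)

sample = ["A" if random.random() < 0.10 else "B" for _ in range(10_000)]
scores = [(g, outcome(g)) for g in sample]

# Naive estimate: ignores the skew and lands near group B's mean.
naive_mean = sum(s for _, s in scores) / len(scores)

# Reweighted estimate: weight each record by (true share / sample share),
# one simple form of the "enrichment" described above.
true_share = {"A": 0.5, "B": 0.5}
sample_share = {g: sum(1 for grp, _ in scores if grp == g) / len(scores) for g in "AB"}
weights = {g: true_share[g] / sample_share[g] for g in "AB"}
weighted_mean = sum(weights[g] * s for g, s in scores) / sum(weights[g] for g, _ in scores)

print(f"naive mean:      {naive_mean:.1f}")      # near 78, pulled toward group B
print(f"reweighted mean: {weighted_mean:.1f}")   # close to the true mean of 70
```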
Confirmation bias, in an artificial intelligence setting, appears when a system relies on pre-existing trends or on datasets that encode a set of prior beliefs rather than analyzing new patterns. In a setting where data is streamed in continuously, the system may focus on confirming those prior beliefs instead of exposing the user to new or unfamiliar ideas about the topic being asked about.
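A toy sketch of this feedback loop, with invented topics and scores, shows how a system that ranks by its own reinforced history never surfaces anything new:

```python
from collections import Counter

# Toy "model": a prior belief that the user prefers politics.
scores = Counter({"politics": 3, "sports": 1, "science": 1})

shown = []
for _ in range(300):
    # Rank strictly by current scores: the prior always wins.
    top_topic = scores.most_common(1)[0][0]
    shown.append(top_topic)
    # Feedback loop: showing an item reinforces its own score, so the
    # system keeps confirming what it already believed.
    scores[top_topic] += 1

print(Counter(shown))  # Counter({'politics': 300}): the prior is never challenged
```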
Measurement bias arises when the data collected differs systematically from the variable of interest. This often occurs when the researcher in charge fails to measure the right variables for the question at hand. For example, if a course attempts to measure the competence of a professor's teaching but only collects feedback from students who passed, the results may be skewed.
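A small simulation, assuming a made-up link between student satisfaction and passing, shows how measuring only the survivors skews the result:

```python
import random

random.seed(1)

# Hypothetical course: each student has a true satisfaction rating (1-5)
# and a pass/fail outcome correlated with that rating.
students = []
for _ in range(1_000):
    rating = random.randint(1, 5)
    passed = random.random() < (0.3 + 0.14 * rating)  # happier students pass more often
    students.append((rating, passed))

all_mean = sum(r for r, _ in students) / len(students)
passed_only = [r for r, p in students if p]
passed_mean = sum(passed_only) / len(passed_only)

print(f"all students: {all_mean:.2f}")    # the variable of interest
print(f"passed only:  {passed_mean:.2f}") # the biased measurement, skewed upward
```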
Stereotyping bias occurs when a system reinforces harmful stereotypes, usually because it has too little information and produces inaccurate depictions as a result. For example, a model asked to depict a boss in the corporate field may output images dominated by white men, reinforcing existing stereotypes about who holds positions of leadership.
Finally, out-group homogeneity bias arises when an artificial intelligence system cannot distinguish individuals who are not part of the majority in its training data, often leading to misclassification. For instance, a model may treat everyone on a basketball team as looking the same rather than recognizing the distinct physical features individual players have.
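One common safeguard is to audit accuracy per group rather than in aggregate. A minimal sketch with simulated predictions and invented error rates illustrates how a single overall number can mask the gap:

```python
import random

random.seed(2)

# Hypothetical evaluation set: 90% majority group, 10% minority group.
records = ["majority" if random.random() < 0.9 else "minority" for _ in range(5_000)]

# Toy classifier: assume it errs 5% of the time on the well-represented
# group but 25% of the time on the group it rarely saw during training.
error_rate = {"majority": 0.05, "minority": 0.25}
results = [(g, random.random() >= error_rate[g]) for g in records]  # True = correct

# A disaggregated audit surfaces the disparity that overall accuracy hides.
for group in ("majority", "minority"):
    subset = [ok for g, ok in results if g == group]
    print(f"{group}: accuracy {sum(subset) / len(subset):.2%} over {len(subset)} samples")

overall = sum(ok for _, ok in results) / len(results)
print(f"overall: {overall:.2%}  # looks fine while masking the gap")
```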
Artificial intelligence has been shown in the media to play a major role in deepening racial and economic inequalities, and systems have been reported time and time again to have a harder time identifying individuals with darker skin complexions.
With these components in mind, and given that the intersection of artificial intelligence and data can perpetuate disparities, it is important to examine what actions governments are taking to combat these issues.
Executive Order 14141
Executive Order 14141, titled Advancing United States Leadership in Artificial Intelligence Infrastructure, is essentially an outline established to protect national security and to ensure competitiveness within the country's economic sector. The executive order was issued by the previous president, Joe Biden, and was revoked by Trump upon his return to office, a move claimed to keep the development of artificial intelligence from being hindered.
In particular, the executive order advocated for the United States to take the initiative in combining artificial intelligence with clean power and required higher labor standards for those working on it.
However, the order was criticized as restrictive toward artificial intelligence and was revoked after being deemed dangerous and inconsistent with allowing the United States to lead in artificial intelligence advancement. It was replaced with a new executive order establishing less government oversight of artificial intelligence.
With these recent developments, it is unknown what direction the intersection of law and artificial intelligence will take, especially given the constant vigilance required to maintain fairness for all users. The challenge is to allow space for new technological advances while understanding where to place proper boundaries within the legislative system. Technology will continue to evolve, and the law must adapt to these ever-growing changes; since it is unknown what future innovations await society, we must take the initiative now to welcome these changes and adjust before they run free outside legislative boundaries.
Image Source: Pixabay