Ethical issues of AI in criminal and forensic investigations
By: Sofia Haro
INTRODUCTION: In recent years, the integration of artificial intelligence (AI) has played a pivotal role in criminal and forensic investigations. AI now helps identify suspects, analyze evidence, and predict recidivism. At the same time, its rapid evolution has raised a complex set of ethical concerns within these practices. The introduction of AI has undoubtedly benefited investigations by enabling tools that work far faster than human labor, and the technology is often credited with unmatched accuracy and efficiency. Yet even as AI becomes crucial to government for its computational power and fast-paced solutions, it challenges fundamental principles of ethics, privacy, and propriety. In my research, I focus on how the use of artificial intelligence can cross ethical boundaries within criminal and forensic investigations. This research examines the ethical limits of AI applications and the power they currently hold over the criminal justice system. Technological advances once seen as flawless ideals are now being scrutinized as potentially harmful to the public's rights and freedoms. By examining the growing popularity of artificial intelligence, this research will analyze the risks and potential for misuse that AI raises in both criminal and forensic investigations.
Artificial intelligence has grown rapidly in popularity in recent years. The technology is now accessible to the public on cell phones through tools such as Apple's Siri voice assistant and ChatGPT, the recent and controversial AI chatbot. Extending AI to the public is a fairly new development, with many unknown long-term risks to privacy and security. Before AI was introduced to the public, it was confined to specialized research and investigative settings, where such systems were regarded as "logical machines." AI integration in the investigative setting has been promoted as removing the human error and flaws that can lead to a loss of served justice (Barrington, 2023). Criminal investigation has turned to artificial intelligence for its computational accuracy and speed, and AI is now common in criminal and forensic work, such as the digital analysis behind facial and voice recognition. Yet this widespread use in the investigative setting creates dangers to security and confidentiality. Not only does increasing access to artificial intelligence create security risks, but the practice of AI in investigations is still relatively new and lacks the track record and data needed to rebut concerns about its dangers.
But how has AI created an unethical presence in criminal investigations? In the criminal justice system, artificial intelligence has been present since the early 2000s. The technology is now used to connect suspects to crimes, recognize criminal patterns to reduce recidivism, and analyze complex crime scenes. In past years, the Pretrial Justice Institute, a well-established reform organization, advocated strongly for artificial intelligence to be used more consistently in criminal justice (Rigano, 2018). The expansion of AI in criminal justice is unlikely to slow down, as older forms of AI are constantly regenerated to perform higher-yield tasks. Tasks now delegated to AI, such as promptly examining large volumes of images and videos for evidence, are fatiguing and draining for a human analyst, whereas an algorithm does not wear out. This delegation, however, introduces the ethical issue of bias: AI models are trained on data inherited from human sources, so the discriminatory practices of humans are often reproduced as unintentional bias (Simonite, 2020). Given this, we must evaluate the risks that artificial intelligence creates in investigative procedures. For instance, predictive analytics, once promoted as a way to reduce crime, has been shown to bring risks of opacity and injustice, increasing racial profiling (Lettieri, 2023). This ethical boundary is relevant to my research because it contradicts the idea that AI is safe and free of discrimination. Later in this research, court cases will illustrate the need for structured use of artificial intelligence in criminal and forensic investigations to ensure society's protected boundaries.
BACKGROUND: The use of artificial intelligence in criminal and forensic investigations has revealed both capabilities and complexities that steer toward unethical practices and possible violations of safety. Reliance on advancing AI has exposed ethical problems that have been present since artificial intelligence first entered the investigative field, where it is now commonly relied on as a tool. To understand the ethical risks of artificial intelligence in the investigative setting, we must first understand the capabilities it possesses. AI has grown tremendously in the 21st century, often called the digital age for its explosion of technology. The field of artificial intelligence was formally founded at a 1956 research workshop at Dartmouth College, organized by researchers including John McCarthy and Marvin Minsky, who predicted that machines could eventually match or exceed human intelligence. The workshop's organizers secured significant funding, but early systems fell far short of those predictions; the funding dried up, the research program stalled, and many other researchers were likewise unsuccessful in advancing the technology.
An earlier landmark was the Turing test, proposed by Alan Turing in his 1950 paper on machine intelligence. The Turing test analyzed computer intelligence through an "imitation game": if a machine's conversational responses could not reliably be distinguished from a human's, the machine could be credited with intelligent behavior. Turing posed a question that would change the outlook on technological abilities, suggesting that machines might prove far more capable than anticipated. He initially doubted that the question "Can machines think?" was meaningful enough to interest anyone; of course, this was not true, as the imitation game became widely known as a breakthrough for the future of artificial intelligence. Turing also warned of the dangers these abilities would introduce, stating that it would be fairly easy for machines to build up to the abilities of "having a man inside" (Goncalves, 2023). In other words, he believed that artificial intelligence could eventually become as biased as a human itself. Arriving so early in the field's development, this idea foreshadowed many of the implications AI now faces. When early systems failed to deliver on the field's hopes, many people lost interest in artificial intelligence, producing what is known as the "AI winter": from the 1970s through the early 1990s, interest declined and research funding dried up (Dia, 2022).
Hope for artificial intelligence rose again when the World Wide Web was created; the AI winter ended, and artificial intelligence once more became a central interest in expanding technology. But how did artificial intelligence come to be involved in criminal and forensic investigations, and why has that involvement raised ethical concerns? AI in the investigative field was most developed in the early 2000s to provide quick data for recognizing criminal patterns, detecting evidence, and limiting crime. The introduction of facial recognition and algorithmic analytics for finding suspects was monumental because of the speed of automated data searches. Facial recognition became popular in law enforcement as a way to enhance surveillance resources. Facial recognition technology typically uses AI software that creates a template of an individual's facial features and compares it to preexisting images to identify a suspect (Congressional Digest, 2020). The technology is often successful in smaller environments, where an individual can be compared against a smaller population of images. Although it can be valuable, facial recognition often fails in large-population settings: accuracy decreases significantly, and studies have shown that the technology is far less reliable at reading the faces of demographic groups such as African Americans (Congressional Digest, 2020). As these are only the first of the issues raised by AI's continued use in investigations, ethical boundaries will continue to be crossed until limitations are set on AI tools.
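The template-comparison step described above can be sketched in a few lines. The following is my own toy illustration, not code from any deployed system: the "templates" are random stand-in vectors, the names are invented, and real products use learned facial embeddings and proprietary matching thresholds.

```python
import numpy as np

# Hypothetical 128-dimensional "templates" (feature vectors) standing in for
# what a face recognition model extracts from an image. Real systems use
# learned embeddings; these random vectors are for illustration only.
rng = np.random.default_rng(seed=0)
gallery = {name: rng.normal(size=128) for name in ("person_a", "person_b", "person_c")}

def cosine_similarity(u, v):
    # Higher values mean the two templates are more alike (maximum 1.0).
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_match(probe, gallery, threshold=0.5):
    # Compare the probe against every enrolled template; return the closest
    # identity only if it clears the similarity threshold.
    name, score = max(
        ((n, cosine_similarity(probe, t)) for n, t in gallery.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= threshold else (None, score)

# A probe that is a slightly noisy copy of person_b's template should match.
probe = gallery["person_b"] + rng.normal(scale=0.1, size=128)
match, score = best_match(probe, gallery)
```

One mechanical reason accuracy degrades in large-population searches is visible even in this sketch: as the enrolled gallery grows, the chance that some unrelated template happens to clear the threshold also grows.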
METHODS: To research the ethical concerns raised by the use of AI in criminal and forensic investigations, I collected sources from a variety of researchers and scholars. My secondary research analyzed peer-reviewed articles, academic books, scholarly news articles, and popular sources addressing the ethical issues of AI. Searching for sources on the origin of artificial intelligence was difficult because, as previously stated, AI passed through the hands of many researchers over a long period of time. The unclear record of AI's early development led me to research broader ethical issues identified by researchers, such as reliability and accuracy. Across the numerous sources I analyzed from AI-specialized researchers, there were many similarities in how artificial intelligence in the investigative field is viewed. Although the sources differed in their opinions on the ethics of artificial intelligence, all of them agreed that the technology is fairly new and may therefore have dangerous, unknown impacts.
In addition to my secondary research, I conducted primary research studying how several government officials at different levels of power view the Blueprint for an AI Bill of Rights. I analyzed several articles and conference dialogues from President Biden and his administration on protections against AI misuse. My research also examined how legislators and other lawmakers interpret the bill by analyzing the effects the AI Bill of Rights would have on current constitutional law. I examined the depths of the bill and whether its stated intentions hold up. From these findings, I drew conclusions about the consequences that artificial intelligence may hold.
FINDINGS: The use of artificial intelligence in the investigative field introduces ethical challenges involving discrimination, equality, and bias. The technology has drawn ethical criticism for a continued history of discrimination and bias in singling out suspects in a crime. To begin, artificial intelligence relies heavily on data sources such as criminal records to analyze recidivism. These AI algorithms are designed to reason like humans by scanning vast arrays of information, which is a problem, since that information is supplied by humans with personal biases. The National Crime Victimization Survey (NCVS), the primary source of data on criminal patterns, collects responses that include required information such as sex, age, and race (Bureau, 2022). The survey allows law enforcement to target highly reported areas of crime. The recent introduction of artificial intelligence into criminal investigations has led AI models to absorb systematic biases from data such as the NCVS, which has disproportionately targeted certain demographic groups.
Predictive analytics is a process that offers data for interpreting a possible outcome. The technique is often used by police departments to determine high-risk locations in an effort to reduce criminal activity. Although predictive analytics was originally favored as a way to solve crime problems, the recent addition of AI has created ethical issues. Predictive analytics, once promoted as a way to reduce crime, has been shown to bring risks of opacity and injustice, increasing racial profiling (Lettieri, 2023).
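One mechanism behind these risks can be made concrete with a toy simulation. This is my own illustration; the numbers, area names, and patrol rule are all invented, not drawn from any real system. If patrols are sent wherever the most incidents are recorded, and patrols in turn record more incidents, an initially tiny gap in the data can lock in permanently:

```python
from collections import Counter

# Toy feedback-loop simulation (invented numbers, not any real system).
# Rule: each day, send an extra patrol to the area with the most *recorded*
# incidents; patrols in turn record extra incidents wherever they are sent.
recorded = Counter({"area_a": 10, "area_b": 9, "area_c": 8})
true_daily_rate = {"area_a": 5, "area_b": 5, "area_c": 5}  # actually identical

for day in range(5):
    hotspot = recorded.most_common(1)[0][0]  # top area gets the extra patrol
    for area, rate in true_daily_rate.items():
        bonus = 3 if area == hotspot else 0  # extra patrol -> extra records
        recorded[area] += rate + bonus

# area_a is flagged as the hotspot every single day, even though the
# underlying rates are equal, purely because it started ahead in the data.
```

Here the underlying rates are identical in all three areas, yet one area is flagged as the hotspot every day simply because it began with slightly more recorded incidents. This kind of opaque, self-reinforcing pattern is one way the injustices described above can arise without anyone intending them.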
AI's role in predictive analytics has decreased accuracy, contributing to the flaws and ethical barriers artificial intelligence must overcome. This flaw is known as predictive bias, the term used when a demographic is overestimated or underestimated by proxy (Raghavan, 2020). The issue is extremely important and often overlooked. The perception that artificial intelligence cannot produce errors because it is a programmed machine is far too commonly idealized; the assumption that advanced technology is neutral has been proven false. The reason lies with humans and their biased data on race, sex, and gender, and the resulting error, which cannot be removed without removing the human data itself, can be extremely damaging to the justice system. A judge may assign a defendant a longer sentence because of an algorithmic score that does not accurately depict the person (Gallo, 2023). A judge may also feel obligated to trust the predicted results and base a decision on the AI-predicted risk of the offender. ProPublica, a nonprofit investigative journalism organization, found that a particular artificial intelligence system used in courts across the U.S. mislabeled Black offenders as likely reoffenders at nearly twice the rate of white offenders (Gallo, 2023).
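The disparity ProPublica measured can be expressed as a difference in false positive rates between groups. The sketch below uses a tiny invented dataset (mine, for illustration only, not ProPublica's actual data) to show the computation:

```python
# Toy audit in the style of a group-wise error-rate analysis (invented data):
# among people who did NOT reoffend, how often was each group nevertheless
# labeled "high risk" by the tool?
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("group_1", True,  False), ("group_1", True,  False), ("group_1", True,  True),
    ("group_1", False, False), ("group_1", False, True),
    ("group_2", True,  False), ("group_2", False, False), ("group_2", False, False),
    ("group_2", False, True),  ("group_2", True,  True),
]

def false_positive_rate(records, group):
    # Fraction of non-reoffenders in `group` who were labeled high risk.
    preds = [pred for g, pred, actual in records if g == group and not actual]
    return sum(preds) / len(preds)

fpr_1 = false_positive_rate(records, "group_1")  # 2 of 3 non-reoffenders flagged
fpr_2 = false_positive_rate(records, "group_2")  # 1 of 3 non-reoffenders flagged
```

A tool can look "accurate" in the aggregate while its mistakes fall much more heavily on one group; auditing error rates per group, as above, is the kind of analysis that exposes such a disparity.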
Unfortunately, CCTV footage can be unrecognizable at times, for many reasons. Facial recognition, a form of artificial intelligence that analyzes video and images to identify a person, has created many ethical complications. It is commonly used in criminal investigations, but its detections are not always accurate, leading to wrongful arrests. As previously mentioned, facial recognition technology is unreliable at accurately identifying African Americans; given this, the continued use of facial recognition matches as evidence is severely insufficient. A lawsuit filed in 2023 by Randal Quran Reid, which blamed the state of Louisiana for his wrongful arrest after a faulty facial recognition match, has raised new conversation about the ethics of artificial intelligence as an investigative tool. Reid was arrested in Georgia and held in jail for several days with no information about the charge against him beyond the facial recognition match to his identity. Reid has been identified as one of several Black plaintiffs wrongfully arrested for a crime because of faulty facial recognition (Thanawala, 2023). His case is one of many seeking justice for the improper results of artificial intelligence. Unreliable AI raises privacy and protection concerns that are especially harmful when the technology is inaccurate, and it is unknown how many people have been accused of crimes because of inaccurate facial recognition.
Many people in favor of using facial recognition in criminal investigations believe that wrongful arrests are failures unlikely to occur frequently. Regardless of their frequency, however, faulty facial recognition matches still leave victims facing unethical consequences. The International Association of Chiefs of Police has issued a statement on the damaging effects of facial recognition in the criminal courts, arguing that a police detective should treat a facial recognition match as a strong clue rather than a fact (Wessler, 2024). This idea has not been adopted as law, as the technology is still under legal consideration. Although facial recognition has not yet been addressed by federal law, Detroit's police department created a policy against using it as evidence, stating that a facial recognition match is purely an investigative lead and is not to be treated as a positive identification (Wessler, 2024). The inconsistencies and inaccuracies of artificial intelligence require legal rules and limits to reduce the misconduct it can create.
DISCUSSION: The most recent effort to limit the complications and dangers of artificial intelligence is the Blueprint for an AI Bill of Rights. The AI Bill of Rights is a blueprint meant to serve as a foundation for the responsible use of artificial intelligence. The document was published in October of 2022, but it is non-binding and therefore does not constitute U.S. government policy. Although there has been a push to make the bill official U.S. policy, some fear it would give too much power to the government. The initial purpose of the AI Bill of Rights is to set ethical guidelines for AI use in all settings, from personal applications to law enforcement.
The blueprint, issued under President Biden, has been described as a call to action to protect citizens from the misuse of artificial intelligence. The White House Office of Science and Technology Policy identified five principles as most significant for protecting the public from AI: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. These principles guide the creation and use of AI systems while protecting the rights of U.S. citizens. In addition to the five principles, the bill includes notes on applying the framework, followed by a technical companion meant to help both the government and private organizations uphold the ethics of the AI Bill of Rights. I will review the AI Bill of Rights and examine how far it goes to protect the U.S. public from artificial intelligence.
The legislative branch, consisting of the House of Representatives and the Senate and collectively known as Congress, is the body of government that makes federal law. U.S. lawmakers are therefore crucial in determining whether the AI Bill of Rights would provide sufficient protection. The importance of legislation in crafting effective laws to protect citizens' rights connects directly to the AI Bill of Rights. Many legislators believe that enacting the AI Bill of Rights as law would create little conflict with existing statutes, since the principles it states are also required by the U.S. Constitution and existing U.S. law. These similarities in principle have made legislators more favorable toward the bill. According to a statement on the official White House website, legislators view current laws as already reflecting the standards the AI bill should include; for instance, both constitutional law and the bill enforce a constitutional requirement for human review of criminal investigative concerns as well as statutory requirements for judicial review (Chakrabarti, 2023). Legislators therefore believe that implementing the AI Bill of Rights may be necessary because it spells out the technological privacy protections the Constitution only implies. In their view, the proposed framework would protect all aspects of technology, serving as an assistant to constitutional rights in the technological sphere. The White House states that the inclusion of the technical companion has helped state and local governments respond to emerging technology issues with legislation (Moore, 2023).
Overall, legislators state that the Blueprint for the AI Bill of Rights is meant both to assist the government in governing AI technology use and to guide private entities in the growing practice of artificial intelligence.
U.S. judiciaries are dealing with cases involving the misuse of artificial intelligence now more than ever. The fast-paced rise in AI use has made it hard for judges, attorneys, and other legal experts to keep up with constantly changing AI concepts. In this part of the study, I focus on the legal implications of AI rights and how they may affect judicial decisions. To begin, a recent report notes that federal prosecutors have been instructed to seek harsher sentences in cases involving AI because of its perceived dangers (Vezke, 2024). AI is believed to carry many unknown dangers, especially in crime, so many judiciaries are focused on reducing the potential risks. Many of the crimes being sentenced harshly include fraud, market manipulation, hacking, and price-fixing. These harsh sentences reflect a goal of enforcing accountability and deterring rising AI-enabled crime. Currently, judges follow existing technology law and the Federal Sentencing Guidelines of 1987, which are structured so that judges sentence according to law in order to serve justice. Because of the recent rise of artificial intelligence, the Federal Sentencing Guidelines must be reinterpreted, since they do not adequately address the harms and misuse of AI (Vezke, 2024). Given these limitations, the AI Bill of Rights is often favored by judiciaries: if it were passed as law, it would give judges a structure to follow in court. The United States Department of Justice sides with judiciaries in favor of the AI Bill of Rights; the department is steadily increasing its focus on AI as it works to measure AI misuse and shape reforms around AI codes of ethics (Alert, 2021). The approval of the Department of Justice, together with advocates of judicial review, strengthens the case for passing the AI Bill of Rights.
President Biden has been the most vocal advocate for codifying the AI Bill of Rights. His administration released the Blueprint for an AI Bill of Rights in October 2022, and on October 30, 2023, he signed an executive order on the safe, secure, and trustworthy use of AI (White House Administration, 2022). The blueprint highlighted several priorities, such as requiring agencies to protect civil rights in governmental AI programs and promoting technological accountability as a whole, and Biden presents it as the beacon of government policy on artificial intelligence. A statement from President Biden, released the day the blueprint was presented to the public, says he believes the five principles stated in the bill will guide the use and advancement of AI technology. He continues by assuring the public that the AI Bill of Rights is a guide meant to protect society from the dangers of AI. To help make the AI Bill of Rights effective, President Biden asked major AI companies to help establish the blueprint's principles, allowing the administration to create a model of responsible AI practice. In his statement, President Biden shares his own experience using AI, such as simple apps on his phone, and says it makes all of our lives easier and better; for example, the National Weather Service uses artificial intelligence to predict weather events from long distances. He then reflects on the importance of artificial intelligence to industry and the economy before quickly shifting to its dangers, which range from scams to fraud to leaks of personal information. To reduce these dangers, President Biden stressed the importance of enacting the AI Bill of Rights, which is why he agreed to an executive order on artificial intelligence safety.
To balance safety and accessibility, the Biden administration advocates for a commitment to AI safety and security through the passage of the AI Bill of Rights.
CONCLUSION: The ethical considerations surrounding the use of artificial intelligence in criminal and forensic investigations are complex and highly situational. Advances in artificial intelligence continue to accelerate, allowing fast-paced results in investigations and reducing investigators' workloads. At the same time, the legal questions raised by AI in the investigative field have created concerns about discrimination, bias, and accuracy. These concerns reduce support for including AI in criminal law, because the protections against AI's ethical risks remain inadequate. Any ethical assessment of AI must also account for the unknown consequences the technology may have in the future. Policymakers, researchers, and advocates must therefore take the concerns AI raises seriously: although the technology has developed into a powerful investigative tool, it must be governed by guidelines that protect the safety and privacy of society.
The AI Bill of Rights is a blueprint for creating a legal and safe environment for the use of artificial intelligence, but the bill must be improved to ensure that discrimination, inaccuracy, and bias are prevented in AI's future use. Ethical frameworks must be established to ensure public safety. Overall, artificial intelligence's role in criminal and forensic investigations has created conflicts over discriminatory values and has not proven reliably accurate. Promoting the responsible development of artificial intelligence in the investigative setting may allow advanced technology and the service of justice to work together. Ultimately, the ethics of artificial intelligence in criminal and forensic investigations requires a balance between efficiency and the protection of human rights; incorporating measures that protect human rights in AI use will uphold constitutional principles.
WORKS CITED
Alert, C. (2021). The impact of artificial intelligence on Federal Criminal Cases: Insights. Stradley Ronon.
Barrington, S., & Farid, H. (2023). A comparative analysis of human and AI performance in forensic estimation of physical attributes. Scientific Reports, 13(1), 1–6. https://doi.org/10.1038/s41598-023-31821-3
Bureau, U. C. (2022). National Crime Victimization Survey (NCVS). Census.gov. https://www.census.gov/programs-surveys/ncvs.html
Chakrabarti, S. (2023). Artificial Intelligence And The Law. Journal of Pharmaceutical Negative Results, 14, 87–95.
Congressional Digest (2020). Legal questions around facial recognition: As technology expands, so do questions about its constitutionality. Congressional Digest, 99(4), 3–5.
Dia, M. (2022). A brief history of AI. That's AI.
Gallo, R. (2023). Algorithms were supposed to reduce bias in criminal justice, do they? Boston University. https://www.bu.edu/articles/2023/do-algorithms-reduce-bias-in-criminal-justice
Gonçalves, B. (2023). The Turing Test is a thought experiment. Minds & Machines, 33(1), 1–31.
Moore, R. (2023). ACLU statement on OMB memo for AI use by federal agencies. American Civil Liberties Union.
Rigano, C. (2018). Using artificial intelligence to address criminal justice needs. National Institute of Justice. https://nij.ojp.gov/topics/articles/using-artificial-intelligence-address-criminal-justice-needs
Simonite, T. (2020). A brief history of artificial intelligence. National Institute of Justice. https://nij.ojp.gov/topics/articles/brief-history-artificial-intelligence
Thanawala, S. (2023, September 25). Facial recognition technology jailed a man for days. His lawsuit joins others from Black plaintiffs. AP News.
Vezke, C. (2024). ACLU statement on OMB memo for AI use by federal agencies. American Civil Liberties Union.
Wessler, N. F. (2024). Police say a simple warning will prevent face recognition wrongful arrests. That's just not true. American Civil Liberties Union.
White House Administration. (2022). Blueprint for an AI bill of rights. The White House. https://www.whitehouse.gov/ostp/ai-bill-of-rights/