notes on technology
exploring new technologies from a humanized point of view.
5 Algorithms that Demonstrate Artificial Intelligence Bias
It is an unfortunate fact of our society that human beings are inherently biased. This can happen consciously, as when people hold prejudices against racial minorities, religions, genders, or nationalities, and it can happen unconsciously, as biases develop through society, family, and social conditioning from birth. Whatever the reason, biases do exist in humans, and they are now being passed into the artificial intelligence systems that humans create.
These biases can be passed into AI systems when they are trained on data that reflects human prejudice, historical inequality, or different standards of judgement based on gender, race, nationality, sexual orientation, and so on. For example, Amazon found that its AI recruiting algorithm was biased against women. The algorithm was trained on the resumes submitted over the previous 10 years and on the candidates who were hired, and since most of those candidates were men, the algorithm learned to favor men over women.

As this example shows, bias in artificial intelligence causes a lot of damage. It hurts the chances of the affected group to participate fully in the world and to contribute equally to the economy. It also erodes people's trust that artificial intelligence algorithms will work without bias, which in turn reduces the chances of AI being adopted across business and industry, because people fear they may be discriminated against. The technology companies that build these algorithms therefore need to ensure they are free of bias before releasing them to the market, and they can support this by encouraging research on Artificial Intelligence Bias so that it can be eradicated in the future. But before that can happen, we also need to know the cases in which Artificial Intelligence Bias has already been demonstrated by different algorithms. So let's look at them, so that we can understand what algorithms should not do in the coming times.
Which algorithms demonstrate Artificial Intelligence Bias?
These are some algorithms that have demonstrated Artificial Intelligence Bias. Notably, these biases typically work against minority or historically marginalized groups, such as Black people, Asian people, and women.
1. COMPAS Algorithm biased against black people
COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions, is an artificial intelligence algorithm created by Northpointe and used in the USA to predict which criminals are more likely to re-offend in the future. Based on these forecasts, judges make decisions about the future of these criminals, ranging from their jail sentences to the bail amounts set for release. However, ProPublica, a Pulitzer Prize-winning nonprofit news organization, found that COMPAS was biased. Black defendants were judged much more likely to re-offend than they actually were, while white defendants were judged less risky by COMPAS than they actually were. Even for violent crimes, black defendants were misclassified as higher risk almost twice as often as white defendants. This discovery proved that COMPAS had somehow learned a bias that is frequent in humans: the assumption that black people commit many more crimes than white people on average and are more likely to commit crimes in the future as well.
2. PredPol Algorithm biased against minorities
PredPol, short for predictive policing, is an artificial intelligence algorithm that aims to predict where crimes will occur in the future based on crime data collected by the police, such as arrest counts and the number of police calls in an area. The algorithm is already used by police departments in California, Florida, Maryland, and other US states, with the aim of reducing human bias in policing by leaving crime prediction to artificial intelligence. However, researchers in the USA discovered that PredPol itself was biased: it repeatedly sent police officers to particular neighborhoods with large racial-minority populations, regardless of how much crime actually happened in those areas. The cause was a feedback loop in PredPol, in which the algorithm predicted more crime in regions where more police reports were filed. But more police reports may have been filed in those regions simply because more police were already concentrated there, possibly due to existing human bias. This produced a bias in the algorithm, which then sent even more police to those regions as a result.
3. Amazon’s Recruiting Engine biased against women
The Amazon recruiting engine was an artificial intelligence algorithm created to analyze the resumes of job applicants applying to Amazon and decide which candidates would be called for further interviews and selection. It was an attempt by Amazon to mechanize the hunt for talented individuals and remove the inherent bias present in all human recruiters. However, the algorithm turned out to be biased against women in the recruitment process. This may have occurred because it was trained to evaluate candidates' resumes by studying Amazon's responses to the resumes submitted over the previous 10 years, and the human recruiters who reviewed those resumes were mostly men, whose bias against women candidates was passed on to the AI algorithm. When Amazon studied the algorithm, it found that the system automatically penalized resumes that contained words like "women" and automatically downgraded graduates of two all-women colleges. Amazon therefore discarded the algorithm and did not use it to evaluate candidates for recruitment.
4. Google Photos Algorithm biased against black people
Google Photos has a labeling feature that tags a photo according to whatever is shown in it. This is done by a Convolutional Neural Network (CNN) that was trained on millions of images with supervised learning and then uses image recognition to tag photos. However, this Google algorithm was found to be racist when it labeled the photos of a black software developer and his friend as gorillas. Google said it was appalled and genuinely sorry for the mistake and promised to correct it in the future. However, all Google had done two years later was to remove gorillas and other types of monkeys from the CNN's vocabulary so that it would not label any photo as such: Google Photos displayed "no results" for all search terms relating to monkeys, such as "gorilla," "chimp," and "chimpanzee." This is only a temporary fix, as it does not solve the underlying problem. Image labeling technology is still not perfect, and even the most complex algorithms depend entirely on their training data, with no reliable way to handle corner cases in real life.
5. IDEMIA’S Facial Recognition Algorithm biased against black women
IDEMIA is a company that creates facial recognition algorithms used by the police in the USA, Australia, France, and other countries. Around 30 million mugshots are analyzed using this facial recognition system in the USA to check whether someone is a criminal or a danger to society. However, when the National Institute of Standards and Technology (NIST) tested the algorithm, it found significantly more mistakes in identifying black women than white women, or black and white men. According to NIST, Idemia's algorithm falsely matched a white woman's face at a rate of one in 10,000, whereas it falsely matched a black woman's face at a rate of one in 1,000. That is 10 times as many false matches for black women, which is a lot! In general, facial recognition algorithms are considered acceptable if their false match rate is one in 10,000, while the false match rate found for black women was much higher. Idemia claims that the algorithms tested by NIST have not been released commercially, and that its algorithms are improving, noting that they identify different races at different rates because of physical differences between races.
Source: https://www.geeksforgeeks.org/5-algorithms-that-demonstrate-artificial-intelligence-bias/
Alan Turing and his enduring legacy
Alan Turing was born in London in 1912 just two years before the First World War. Growing up in the aftermath of brutal international conflict, Turing’s parents were keen to ensure that their son was able to thrive within education.
Early life
His passion for learning became clear at the age of 13, when the 1926 General Strike prevented him from attending his first day of school. Determined not to miss it, Alan Turing cycled 60 miles unaccompanied, stopping overnight at an inn and attending school the next day.
Achievements and Hardships
It became clear from an early age that Alan Turing was a maths prodigy, and over the course of his life and career he pioneered mathematics and computer science, changing the way we see and understand the world. From altering the course of history by breaking the Enigma code at Bletchley Park during the Second World War, through to applying his practical wartime experience to design the principles that underlie modern computers, Alan Turing's legacy has shaped the lives of millions of people.
However, Alan Turing faced much hardship during his life due to his sexuality. During Turing’s life, homosexuality was a criminal offence and Turing was convicted in 1952 of “Gross Indecency”. Alan Turing was faced with an impossibly cruel choice of imprisonment, or probation on the condition he underwent chemical castration. Turing died from suicide two years later.
More than a century since the birth of mathematician Alan Turing, much has changed within the social, political and cultural landscape of the UK. One of the defining markers of change has been the LGBT+ liberation movement, which began in the 1970s and campaigned for equal rights for the gay community.
Thanks to the efforts of activists, historians and politicians, Turing’s legacy has not been forgotten. In 2013, HM Queen Elizabeth II signed a pardon for Turing’s conviction with immediate effect. Since then, the Alan Turing Law has gone on to secure pardons for 75,000 other men and women convicted of similar crimes.
Source: https://educationhub.blog.gov.uk/2021/02/19/lgbt-history-month-alan-turing-and-his-enduring-legacy/
Tips: accessibility for users with low vision
1. Relative Font Sizes
Use relative font sizes expressed in percentages or ems, rather than absolute font sizes expressed in points or pixels. This practice allows users to make the text larger or smaller as desired—an important feature for users with low vision. See Tips for Computer Users With Low Vision for more details.
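For instance, a minimal style sketch (the selectors and sizes here are illustrative, not part of the original tip) might look like this:
<style>
  body { font-size: 100%; }     /* starts from the user's browser text-size setting */
  p { font-size: 1em; }         /* scales with the surrounding text */
  .note { font-size: 0.875em; } /* still grows when the user enlarges the text */
</style>
Because every size is relative, a user who increases the browser's default text size sees all of this text grow proportionally, whereas text fixed in points or pixels may ignore that setting in some browsers.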
2. Provide Alternative Text for Images
It is absolutely necessary to provide text equivalents for all meaningful graphics. If the graphic includes text, be sure that the alternative text (often referred to as an "alt tag") supplies all of the words.
Example:
<img src="graphic.gif" alt="Acme - supplying widgets since 1945" width="50" height="20">
That said, providing descriptive "alt" text for spacers or placeholder graphics subjects the speech user to meaningless information. Spacers and graphics used only for positioning should be labeled with alt=" " (quote, space, quote). Note that you should never completely omit the alt tag, even for placeholder graphics; omitting it subjects the speech user to hearing the file name of the image.
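For instance, a spacer image might be marked up as follows (the file name spacer.gif is illustrative):
<img src="spacer.gif" alt=" " width="10" height="1">
With this near-empty alt text, a screen reader can skip the image instead of announcing its file name.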
If you are in doubt as to whether or not to describe an image with an alt tag, do it.
3. Name Links Carefully
Users often move through a page by tabbing from link to link. Never use "Click here" or "Learn More" as the text for your links. "Download SuperSoftware 4.8" is self-explanatory.
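For instance (the href value is illustrative):
Bad: <a href="download.html">Click here</a> to get SuperSoftware 4.8.
Better: <a href="download.html">Download SuperSoftware 4.8</a>
A user tabbing from link to link hears only the link text, so the second version says exactly where the link leads.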
4. Explicitly State Information
Do not rely on visual presentation alone, such as indentation or color, to convey meaning.
Example:
Indicating required fields in a form by making them bold is bad. Indicating required fields by using a phrase such as "required" is ok.
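A minimal sketch of the second approach (the field name is illustrative) could be:
<label for="email">Email address (required)</label>
<input type="text" id="email" name="email">
Here the word "required" is part of the text itself, so it reaches every user regardless of whether they can perceive bold styling or color.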
5. Provide Skip Links
Skip links allow the speech software or braille user to bypass information that is repeated on every page, such as navigation bars. Speech and braille users generally read the page from top to bottom, and consequently are subjected to repeated information before reaching the heart of the page. Skip links allow these users to jump past the repetitive navigation links to get to the main content on the page.
To implement skip links, place a link before the repeated information as follows:
<a href="#content"><img src="empty.gif" height="15" border="0" alt="Skip Main Navigation" width="5"></a>
and place an anchor at the beginning of unique copy:
<a name="content"></a>
The image can be transparent, so that the visual display is not affected, but speech and braille users can hear the alt tag reading "skip main navigation".
It is also possible to use a text link that literally says <a href="#content">skip main navigation</a>, if you prefer. We do not recommend using an "invisible" text link, like a single space or underscore. That technique relies on the link's title to provide accessibility, and not all screen readers provide the link title to the user.
Source: https://www.afb.org/consulting/afb-accessibility-resources/tips-and-tricks