
Artificial Intelligence - Machine Learning Regressions.

 


"Machine learning," a phrase originated by Arthur Samuel in 1959, is a kind of artificial intelligence that produces results without requiring explicit programming.

Instead, the system learns from a database on its own and improves over time.

Machine learning techniques have a wide range of applications (e.g., computer vision, natural language processing, autonomous gaming agents, classification, and regressions) and are used in practically every sector due to their resilience and simplicity of implementation (e.g., tech, finance, research, education, gaming, and navigation).

Despite their vast range of applications, machine learning algorithms can be broadly classified into three learning types: supervised, unsupervised, and reinforcement learning.

Machine learning regressions are an example of supervised learning.

They use algorithms that have been trained on data with labeled continuous numerical outputs.

The quantity of training data and the validation criteria needed to suitably train and verify a regression algorithm depend on the problem being addressed.

Once trained, the resulting predictive models infer outputs for new data with comparable input structures.

These models are not static.

They may be updated on a regular basis with new training data or by supplying the correct outputs for previously unlabeled inputs.

Despite machine learning methods' generalizability, no single algorithm is optimal for all regression problems.

Choosing the best machine learning regression method for a given situation involves many considerations (e.g., programming languages, available libraries, algorithm types, data size, and data structure).





Some machine learning programs employ single-variable or multivariable linear regression approaches, much like classic statistical methods.

These models represent the connections between a single or several independent feature variables and a dependent target variable.

The models provide linear representations of the combined input variables as their output.

These models are only suitable for small, relatively simple data sets.

Polynomial regressions may be used with nonlinear data.

This requires the programmers to already know the structure of the data, which is often the very thing machine learning models are meant to uncover in the first place.

These methods are unlikely to be appropriate for most real-world data, but they give a basic starting point and might provide users with models that are straightforward to understand.
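As a rough illustration, the following sketch fits a simple linear model and a degree-2 polynomial model to a small synthetic dataset using scikit-learn; the data, the degree, and the library choice are assumptions made for demonstration, not part of the original discussion.

```python
# Minimal sketch of linear and polynomial regression with scikit-learn.
# X and y are synthetic placeholders standing in for a real dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))                           # one feature variable
y = 0.5 * X[:, 0] ** 2 - 2 * X[:, 0] + rng.normal(0, 1, 100)    # nonlinear target

# Linear model: a straight-line fit of the input variable(s).
linear = LinearRegression().fit(X, y)

# Polynomial model: expands the features (x, x^2) before fitting a linear model,
# which requires choosing the degree of the data's structure in advance.
poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

print("linear R^2:", linear.score(X, y))
print("degree-2 polynomial R^2:", poly.score(X, y))
```

The polynomial pipeline makes the point above concrete: the programmer must already know, or guess, the degree of the relationship before fitting.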

Decision trees, as the name implies, are tree-like structures that map input features/attributes to an eventual output or goal.

A decision tree algorithm starts at the root node (i.e., an input variable), and the answer to that node's condition splits into edges.

A node that no longer splits is called a leaf; a node that continues to split is an internal node.

For example, age, weight, and family diabetic history might be used as input factors in a dataset of diabetic and nondiabetic patients to estimate the likelihood of a new patient developing diabetes.

The age variable might be used as the root node (e.g., age 40), with the dataset being divided into patients aged 40 or older and those aged 39 or younger.

If the next internal node along the 40-or-older branch asks whether a parent has or had diabetes, and the leaf for the affirmative answer estimates a 60 percent likelihood of this patient developing diabetes, the model returns that leaf as the final output.

This is a very basic decision tree that demonstrates the decision-making process.

Thousands of nodes may readily be found in a decision tree.

Random forest algorithms are essentially ensembles of decision trees.

They are made up of hundreds of decision trees, the ultimate outputs of which are the averaged outputs of the individual trees.

Although decision trees and random forests are excellent at learning very complex data structures, they are prone to overfitting.

With adequate pruning (e.g., setting limits on the number of samples required for splits and leaves) and large enough random forests, overfitting may be reduced.
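To make the diabetes example above concrete, here is a hedged sketch using scikit-learn on purely synthetic patient features (age, weight, parental history); the features, thresholds, and hyperparameter values are illustrative assumptions only.

```python
# Sketch of a decision tree and a random forest on a synthetic "diabetes risk"
# dataset mirroring the example above. The data is fabricated, not clinical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
age = rng.integers(20, 80, n)
weight = rng.normal(80, 15, n)
parent_history = rng.integers(0, 2, n)
X = np.column_stack([age, weight, parent_history])
# Synthetic label: risk rises with age, weight, and family history.
y = ((0.02 * age + 0.01 * weight + 0.8 * parent_history
      + rng.normal(0, 0.5, n)) > 2.2).astype(int)

# max_depth and min_samples_leaf act as pruning controls that limit splitting.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, y)

# A random forest averages the votes of many individual decision trees.
forest = RandomForestClassifier(n_estimators=200, min_samples_leaf=10).fit(X, y)

new_patient = [[45, 95, 1]]  # age 45, 95 kg, a parent with diabetes
print("tree probability:", tree.predict_proba(new_patient)[0, 1])
print("forest probability:", forest.predict_proba(new_patient)[0, 1])
```

The pruning arguments and the size of the forest are exactly the levers described above for keeping these models from overfitting.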

Machine learning techniques inspired by the neural connections of the human brain are known as neural networks.


Neurons are the basic unit of neural network algorithms, much as they are in the human brain, and they are organized into numerous layers.

The input layer contains the input variables, the hidden layers contain the layers of neurons (there may be numerous hidden layers), and the output layer contains the final neuron.

A single neuron in a feedforward process (sketched in code below)

(a) takes the input feature variables,

(b) multiplies the feature values by a weight,

(c) adds the resulting products, together with a bias term, and

(d) passes the sum through an activation function, most often a sigmoid function.
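Steps (a) through (d) can be written out directly; the following minimal NumPy sketch uses arbitrary example inputs, weights, and bias values purely for illustration.

```python
# Forward pass of a single neuron: weighted sum of inputs plus a bias,
# passed through a sigmoid activation. All values are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

features = np.array([0.8, 0.2, 0.5])   # (a) input feature values
weights = np.array([0.4, -0.6, 0.9])   # (b) one weight per feature
bias = 0.1                             # (c) bias term added to the weighted sum
activation = sigmoid(np.dot(weights, features) + bias)  # (d) activation function
print(activation)                      # output passed on to the next layer
```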


The weights and biases of each neuron are adjusted using partial derivatives computed backward from the output through the network's neurons and layers.

This process is called backpropagation.


The output of the activation function of a single neuron is distributed to all neurons in the next hidden layer or final output layer.

The output of the final neuron is the predicted value.

Because neural networks are exceptionally adept at learning exceedingly complicated variable associations, programmers may spend less time restructuring their data.

Neural network models, on the other hand, are difficult to interpret due to their complexity, and the intervariable relationships are largely hidden.

When used on extremely big datasets, neural networks operate best.

They need meticulous hyperparameter tuning and considerable processing capacity.

For data scientists attempting to comprehend massive datasets, machine learning has become the standard technique.

Machine learning systems are always being improved in terms of accuracy and usability by researchers.

Machine learning algorithms, on the other hand, are only as useful as the data used to train the model.

Poor data produces dramatically erroneous outcomes, while biased data combined with a lack of knowledge deepens societal disparities.

 



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Algorithmic Bias and Error; Automated Machine Learning; Deep Learning; Explainable AI; Gender and AI.



Further Reading:


Garcia, Megan. 2016. “Racist in the Machine: The Disturbing Implications of Algorithmic Bias.” World Policy Journal 33, no. 4 (Winter): 111–17.

Géron, Aurélien. 2019. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. Sebastopol, CA: O’Reilly.



Artificial Intelligence - Gender and Artificial Intelligence.

 



Artificial intelligence and robots are often thought to be sexless and genderless in today's society, but this is not the case.

In fact, humans encode gender and stereotypes into artificial intelligence systems in much the same way that gender is woven into language and culture.

The data used to train artificial intelligences has a gender bias.

Biased data may cause significant discrepancies in computer predictions and conclusions.

In humans, such differences would be called discrimination.

AIs are only as good as the people who provide the data that machine learning systems capture, and they are only as ethical as the programmers who create and supervise them.

Machines presume gender prejudice is normal (if not acceptable) human behavior when individuals exhibit it.

When utilizing numbers, text, graphics, or voice recordings to teach algorithms, bias might emerge.

Machine learning is the use of statistical models to evaluate and categorize large amounts of data in order to generate predictions.

Deep learning is the use of neural network topologies that are intended to imitate the workings of the human brain.

Data is labeled using classifiers based on previous patterns.

Classifiers have a lot of power.

By studying data from automobiles visible in Google Street View, they can precisely forecast income levels and political leanings of neighborhoods and cities.

The language individuals employ reveals gender prejudice.

This bias may be apparent in the names of items as well as how they are ranked in significance.

Descriptions of men and women are skewed, beginning with the frequency with which their respective titles are used and whether they are referred to as men and women versus boys and girls.

The analogies and words employed are skewed as well.

Biased AI may influence whether or not individuals of particular genders or ethnicities are targeted for certain occupations, whether or not medical diagnoses are correct, whether or not they are able to acquire loans, and even how exams are scored.

"Woman" and "girl" are more often associated with the arts than with mathematics in AI systems.

Similar biases have been discovered in Google's AI systems for finding employment prospects.



Facebook and Microsoft's algorithms regularly correlate pictures of cooking and shopping with female activity, whereas sports and hunting are associated with masculine activity.

Researchers have discovered instances when gender prejudices are purposefully included into AI systems.

Men, for example, are more often provided opportunities to apply for highly paid and sought-after positions on job sites than women.

Female-sounding names for digital assistants on smartphones include Siri, Alexa, and Cortana.

According to Alexa's creator, the name came from negotiations with Amazon CEO Jeff Bezos, who desired a virtual assistant with the attitude and gender of the starship Enterprise's computer from the Star Trek television program, whose voice is that of a woman.

Deborah Harrison, the Cortana project's head, claims that their female voice arose from studies demonstrating that people react better to female voices.

However, when BMW introduced a female voice to its in-car GPS route planner, it experienced instant backlash from males who didn't want their vehicles to tell them what to do.

Female voices should seem empathic and trustworthy, but not authoritative, according to the company.

Affectiva, a startup that specializes in artificial intelligence, utilizes photographs of six million people's faces as training data to attempt to identify their underlying emotional states.

The startup is now collaborating with automakers to utilize real-time footage of drivers to assess whether or not they are weary or furious.

The automobile would advise these drivers to pull over and take a break.

However, the organization has discovered that women seem to "laugh more" than males, which complicates efforts to accurately estimate the emotional states of normal drivers.

In hardware, the same biases might be discovered.

A disproportionate percentage of female robots are created by computer engineers, who are still mostly male.

The NASA Valkyrie robot, designed for future space missions, has breasts.

Jia, a shockingly human-looking robot created at China's University of Science and Technology, has long wavy black hair, pale complexion, and pink lips and cheeks.

She maintains her eyes and head inclined down when initially spoken to, as though in reverence.

She wears a tight gold gown that is slender and busty.

"Yes, my lord, what can I do for you?" she says as a welcome.

"Don't get too near to me while you're taking a photo," Jia says when asked to snap a picture.

It will make my face seem chubby." In popular culture, there is a strong prejudice against female robots.

Fembots in the 1997 film Austin Powers discharged bullets from their breast cups, weaponizing female sexuality.

The majority of robots in music videos are female robots.

Duran Duran's "Electric Barbarella" was the first song accessible for download on the internet.

Björk's video "All Is Full of Love" gave birth to the archetypal white-sheathed robot now seen in so many places.

Marina and the Diamonds' protest that "I Am Not a Robot" is met by Hoodie Allen's quick answer that "You Are Not a Robot." In "The Ghost Inside" by Broken Bells, a female robot sacrifices plastic body parts to pay tolls and reclaim paradise.

The skin of Lenny Kravitz's "Black Velveteen" is titanium.

Hatsune Miku and Kagamine Rin are anime-inspired holographic vocaloid singers.

Daft Punk is the notable exception, where robot costumes conceal the genuine identity of the male musicians.

Sexy robots are the principal love interests in films like Metropolis (1927), The Stepford Wives (1975), Blade Runner (1982), Ex Machina (2014), and Her (2013), as well as television programs like Battlestar Galactica and Westworld.

Meanwhile, "killer robots," or deadly autonomous weapons systems, are hypermasculine.

Atlas, Helios, and Titan are examples of rugged military robots developed by the Defense Advanced Research Projects Agency (DARPA).

Achilles, Black Knight, Overlord, and Thor PRO are some of the names given to self-driving automobiles.

The HAL 9000 computer embedded in the spacecraft Discovery in 2001: A Space Odyssey (1968), the most renowned autonomous vehicle of all time, is masculine and deadly.

In the field of artificial intelligence, there is a clear gender disparity.

The head of the Stanford Artificial Intelligence Lab, Fei-Fei Li, revealed in 2017 that her team was mostly made up of "men in hoodies" (Hempel 2017).

Women make up just approximately 12% of the researchers who speak at major AI conferences (Simonite 2018b).

In computer and information sciences, women earn 19 percent of bachelor's degrees and 22 percent of PhD degrees (NCIS 2018).

Women now earn a lower proportion of bachelor's degrees in computer science than they did in 1984, when their share peaked at 37 percent (Simonite 2018a).

This is despite the fact that the earliest "computers," as shown in the film Hidden Figures (2016), were women.

There is significant dispute among philosophers over whether un-situated, gender-neutral knowledge may exist in human society.

Users projected gender preferences on Google and Apple's unsexed digital assistants even after they were launched.

White males developed centuries of professional knowledge, which was eventually unleashed into digital realms.

Will machines be able to build and employ rules based on impartial information for hundreds of years to come? In other words, does scientific knowledge have a gender? Is it masculine or feminine? Alison Adam is a Science and Technology Studies researcher who is more interested in the gender of the ideas produced than in the gender of the people involved.

Sage, a British corporation, recently employed a "conversation manager" entrusted with building a gender-neutral digital assistant, eventually dubbed "Pegg." To guide its programmers, the company has also formalized "five key principles" in an "ethics of code" document.

According to Sage CEO Kriti Sharma, "by 2020, we'll spend more time talking to machines than our own families," thus getting technology right is critical.

Microsoft recently established Aether (AI and Ethics in Engineering and Research), an internal ethics panel for artificial intelligence.

Gender Swap is a project that employs a virtual reality system as a platform for embodiment experience, a kind of neuroscience in which users may sense themselves in a new body.

Human partners utilize the immersive Head Mounted Display Oculus Rift and first-person cameras to generate the brain illusion.

Both users coordinate their motions to generate this illusion.

The embodiment experience will not work if one user's movements do not correspond to the other's.

This means that every move they make together must be agreed upon by both users.

On a regular basis, new causes of algorithmic gender bias are discovered.

Joy Buolamwini, an MIT computer science graduate student, discovered gender and racial prejudice in the way AI detected individuals' looks in 2018.

She discovered, with the help of other researchers, that the datasets behind skin-type classification systems, categorized using the dermatologist-approved Fitzpatrick scale, were primarily made up of lighter-skinned subjects (up to 86 percent).

The researchers built a rebalanced dataset by skin type and used it to evaluate three off-the-shelf gender classification systems.

They discovered that darker-skinned women are the most misclassified group in all three commercial systems.

Buolamwini founded the Algorithmic Justice League, a group that fights unfairness in decision-making software.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 

Algorithmic Bias and Error; Explainable AI.


Further Reading:


Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research: Conference on Fairness, Accountability, and Transparency 81: 1–15.

Hempel, Jessi. 2017. “Melinda Gates and Fei-Fei Li Want to Liberate AI from ‘Guys With Hoodies.’” Wired, May 4, 2017. https://www.wired.com/2017/05/melinda-gates-and-fei-fei-li-want-to-liberate-ai-from-guys-with-hoodies/.

Leavy, Susan. 2018. “Gender Bias in Artificial Intelligence: The Need for Diversity and Gender Theory in Machine Learning.” In GE ’18: Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, 14–16. New York: Association for Computing Machinery.

National Center for Education Statistics (NCIS). 2018. Digest of Education Statistics. https://nces.ed.gov/programs/digest/d18/tables/dt18_325.35.asp.

Roff, Heather M. 2016. “Gendering a Warbot: Gender, Sex, and the Implications for the Future of War.” International Feminist Journal of Politics 18, no. 1: 1–18.

Simonite, Tom. 2018a. “AI Is the Future—But Where Are the Women?” Wired, August 17, 2018. https://www.wired.com/story/artificial-intelligence-researchers-gender-imbalance/.

Simonite, Tom. 2018b. “AI Researchers Fight Over Four Letters: NIPS.” Wired, October 26, 2018. https://www.wired.com/story/ai-researchers-fight-over-four-letters-nips/.

Søraa, Roger Andre. 2017. “Mechanical Genders: How Do Humans Gender Robots?” Gender, Technology, and Development 21, no. 1–2: 99–115.

Wosk, Julie. 2015. My Fair Ladies: Female Robots, Androids, and Other Artificial Eves. New Brunswick, NJ: Rutgers University Press.



Artificial Intelligence - What Is Explainable AI Or XAI?

 




Explainable AI (XAI) refers to approaches or design decisions used in automated systems so that artificial intelligence and machine learning produce outputs with a logic that humans can understand and explain.




The extensive use of algorithmically assisted decision-making in social settings has raised considerable concerns about the possibility of accidental prejudice and bias being encoded in the resulting decisions.




Furthermore, the application of machine learning in domains that need a high degree of accountability and transparency, such as medicine or law enforcement, emphasizes the importance of outputs that are easy to understand.

The fact that a human operator is not involved in automated decision-making does not rule out the possibility of human bias being embedded in the outcomes produced by machine computation.




Artificial intelligence's already limited accountability is exacerbated by the lack of due process and human logic.




The consequences of algorithmically driven processes are often so complicated that even their engineering designers are unable to understand or predict them.

The black box of AI is a term that has been used to describe this situation.

To address these flaws, the General Data Protection Regulation (GDPR) of the European Union contains a set of regulations that provide data subjects the right to an explanation.

Article 22, which deals with automated individual decision-making, and Articles 13, 14, and 15, which deal with transparency rights in relation to automated decision-making and profiling, are the most relevant provisions.


When a decision based purely on automated processing has "legal implications" or "similarly substantial" effects on a person, Article 22 of the GDPR reserves a "right not to be subject to a decision based entirely on automated processing" (GDPR 2016).





It also provides three exceptions to this right: when automated processing is necessary for a contract, when a member state of the European Union has passed legislation establishing an exemption, or when a person has explicitly consented to algorithmic decision-making.

Even if an exemption to Article 22 applies, the data subject has the right to "request human involvement on the controller's side, to voice his or her point of view, and to challenge the decision" (GDPR 2016).





Articles 13 through 15 of the GDPR provide a number of notification rights when personal data is collected from the data subject (Article 13) or obtained from third parties (Article 14), as well as the right to access such data at any time (Article 15), including "meaningful information about the logic involved" (GDPR 2016).

Recital 71 protects the data subject's right to "receive an explanation of the conclusion taken following such evaluation and to contest the decision" where an automated decision is made that has legal consequences or has a comparable impact on the person (GDPR 2016).





Recital 71 is not legally binding, but it does give advice on how to interpret relevant provisions of the GDPR.

The question of whether a mathematically interpretable model is sufficient to account for an automated judgment and provide transparency in automated decision-making is gaining traction.

Ex-ante/ex-post auditing is an alternative technique that focuses on the processes around machine learning models rather than the models themselves, which may be incomprehensible and counterintuitive.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.


See also: 


Algorithmic Bias and Error; Deep Learning.


Further Reading:


Brkan, Maja. 2019. “Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond.” International Journal of Law and Information Technology 27, no. 2 (Summer): 91–121.

GDPR. 2016. European Union. https://gdpr.eu/.

Goodman, Bryce, and Seth Flaxman. 2017. “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation.’” AI Magazine 38, no. 3 (Fall): 50–57.

Kaminski, Margot E. 2019. “The Right to Explanation, Explained.” Berkeley Technology Law Journal 34, no. 1: 189–218.

Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “A Study into the Layers of Automated Decision-Making: Emergent Normative and Legal Aspects of Deep Learning.” International Review of Law, Computers & Technology 31, no. 2: 170–87.

Selbst, Andrew D., and Solon Barocas. 2018. “The Intuitive Appeal of Explainable Machines.” Fordham Law Review 87, no. 3: 1085–1139.



Artificial Intelligence - What Are Clinical Decision Support Systems?

 


In patient-physician contacts, decision-making is a critical activity, with judgements often based on partial and insufficient patient information.

In principle, physician decision-making, which is undeniably complicated and dynamic, is hypothesis-driven.

Diagnostic intervention is based on a hypothetico-deductive process of testing hypotheses against clinical evidence to arrive at conclusions.

Evidence-based medicine is a method of medical practice that incorporates individual clinical skill and experience with the best available external evidence from scientific literature to enhance decision-making.

Evidence-based medicine must be based on the highest quality, most trustworthy, and systematic data available.

The important questions remain, knowing that both evidence-based medicine and clinical research are required but that neither is perfect: How can doctors get the most up-to-date scientific evidence? What constitutes the best evidence? How can doctors be helped to decide whether external clinical evidence from systematic research should influence their practice? A hierarchy of evidence, applied correctly, can help determine which kinds of evidence are more likely to produce reliable answers to clinical problems.

Despite the lack of a broadly agreed hierarchy of evidence, Alba DiCenso et al. (2009) established the 6S Hierarchy of Evidence-Based Resources as a framework for classifying and selecting resources that assess and synthesize research results.

The 6S pyramid was created to help doctors and other health-care professionals make choices based on the best available research data.

It shows a hierarchy of evidence in which higher levels give more accurate and efficient forms of information.

Individual studies are at the bottom of the pyramid.

Although they serve as the foundation for research, a single study has limited practical relevance for practicing doctors.

Clinicians have been taught for years that randomized controlled trials are the gold standard for making therapeutic decisions.

Researchers may use randomized controlled trials to see whether a treatment or intervention is helpful in a particular patient population, and a strong randomized controlled trial can overturn years of conventional wisdom.

Physicians, on the other hand, care more about whether it will work for their patient in a specific situation.

A randomized controlled study cannot provide this information.

A research synthesis may be thought of as a study of studies, since it reflects a higher level of evidence than individual studies.

It makes conclusions about a practice's efficacy by carefully examining evidence from various experimental investigations.

Systematic reviews and meta-analyses, which are often seen as the pillars of evidence-based medicine, have their own set of issues and rely on rigorous evaluation of the features of the available data.

The problem is that most doctors are unfamiliar with the statistical procedures used in a meta-analysis and are uncomfortable with the fundamental scientific ideas needed to evaluate data.

Clinical practice recommendations are intended to bridge the gap between research and existing practice, reducing unnecessary variation in practice.

In recent years, the number of clinical practice recommendations has exploded.

The development process is largely responsible for the guidelines' credibility.

The most serious problem is the lack of scientific evidence that these clinical practice guidelines are based on.

They don't all have the same level of quality and trustworthiness in their evidence.

The search for evidence-based resources should start at the top of the 6S pyramid, at the systems layer, which includes computerized clinical decision support systems.

Computerized clinical decision support systems (also known as intelligent medical platforms) are health information technology-based software that builds on the foundation of an electronic health record to provide clinicians with intelligently filtered and organized general and patient-specific information to improve health and clinical care.

Laboratory measurements, for example, are often color-coded to show whether they lie inside or outside of a reference range.
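As a simple illustration of this kind of filtering, the sketch below flags laboratory values against reference ranges; the analyte names, ranges, and values are placeholders, not clinical guidance or any particular vendor's implementation.

```python
# Illustrative sketch of flagging laboratory results against reference ranges,
# the sort of patient-specific filtering a decision support layer provides.
REFERENCE_RANGES = {
    "hemoglobin_g_dl": (12.0, 17.5),   # placeholder reference range
    "potassium_mmol_l": (3.5, 5.1),    # placeholder reference range
}

def flag_results(results):
    """Label each measurement as LOW, HIGH, or NORMAL relative to its range."""
    flags = {}
    for test, value in results.items():
        low, high = REFERENCE_RANGES[test]
        if value < low:
            flags[test] = "LOW"
        elif value > high:
            flags[test] = "HIGH"
        else:
            flags[test] = "NORMAL"
    return flags

print(flag_results({"hemoglobin_g_dl": 10.9, "potassium_mmol_l": 4.2}))
```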

The computerized clinical decision support systems now available are not simple models that produce just an output.

Multiple phases are involved in the interpretation and use of a computerized clinical decision support system, including displaying the algorithm output in a specified fashion, the clinician's interpretation, and finally the medical decision.

Despite the fact that computerized clinical decision support systems have been proved to minimize medical mistakes and enhance patient outcomes, user acceptability has prevented them from reaching their full potential.

Aside from the interface problems, doctors are wary about computerized clinical decision support systems because they may limit their professional autonomy or be utilized in the case of a medical-legal dispute.

Although computerized clinical decision support systems still need human participation, some critical sectors of medicine, such as cancer, cardiology, and neurology, are adopting artificial intelligence-based diagnostic tools.

Machine learning methods and natural language processing systems are the two main groups of these instruments.

Patients' data is used to construct a structured database for genetic, imaging, and electrophysiological records, which is then analyzed for a diagnosis using machine learning methods.

To assist the machine learning process, natural language processing systems construct a structured database utilizing clinical notes and medical periodicals.

Furthermore, machine learning algorithms in medical applications seek to cluster patients' features in order to predict the likelihood of illness outcomes and offer a prognosis to the clinician.

Several machine learning and natural language processing technologies have been coupled to produce powerful computerized clinical decision support systems that can process and offer diagnoses as well as or better than doctors.

When it came to detecting lymph node metastases, a Google-developed approach using convolutional neural networks surpassed pathologists.

Compared to pathologists, who had a sensitivity of 73 percent, the convolutional neural network achieved a sensitivity of 97 percent.

Furthermore, when the same convolutional neural network was used to classify skin cancers, it performed at a level comparable to dermatologists (Krittanawong 2018).

Depression is also diagnosed and classified using such approaches.

By merging artificial intelligence's capability with human views, empathy, and experience, physicians' potential will be increased.

The advantages of advanced computerized clinical decision support systems, on the other hand, are not limited to diagnoses and classification.

By reducing processing time and thus improving patient care, computerized clinical decision support systems can be used to improve communication between physicians and patients.

To avoid drug-drug interactions, computerized clinical decision support systems can prioritize medication prescription for patients based on their medical history.

More importantly, by extracting past medical history and using patient symptoms to determine whether the patient should be referred to urgent care, a specialist, or a primary care doctor, computerized clinical decision support systems equipped with artificial intelligence can aid triage diagnosis and reduce triage processing times.

Because such acute conditions are among the primary causes of mortality in North America, developing artificial intelligence around these highly specialized medical problems is critical.

Artificial intelligence has also been used in other ways with computerized clinical decision support systems.

The studies of Long et al. (2017), who used ocular imaging data to identify congenital cataract disease, and Gulshan et al. (2016), who used retinal fundus photographs to detect referable diabetic retinopathy, are two recent examples.

Both studies show how artificial intelligence is growing exponentially in the medical industry and how it may be applied in a variety of ways.

Although computerized clinical decision support systems hold great promise for facilitating evidence-based medicine, much work has to be done to reach their full potential in health care.

The growing familiarity of new generations of doctors with sophisticated digital technology may encourage the usage and integration of computerized clinical decision support systems.

Over the next decade, the market for such systems is expected to expand dramatically.

The pressing need to lower the prevalence of drug mistakes and worldwide health-care expenditures is driving this expansion.

Computerized clinical decision support systems are the gold standard for assisting and supporting physicians in their decision-making.

In order to benefit doctors, patients, health-care organizations, and society, the future should include more advanced analytics, automation, and a more tailored interaction with the electronic health record. 



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Automated Multiphasic Health Testing; Expert Systems; Explainable AI; INTERNIST-I and QMR.


Further Reading

Arnaert, Antonia, and Norma Ponzoni. 2016. “Promoting Clinical Reasoning Among Nursing Students: Why Aren’t Clinical Decision Support Systems a Popular Option?” Canadian Journal of Nursing Research 48, no. 2: 33–34.

Arnaert, Antonia, Norma Ponzoni, John A. Liebert, and Zoumanan Debe. 2017. “Transformative Technology: What Accounts for the Limited Use of Clinical Decision Support Systems in Nursing Practice?” In Health Professionals’ Education in the Age of Clinical Information Systems, Mobile Computing, and Social Media, edited by Aviv Shachak, Elizabeth M. Borycki, and Shmuel P. Reis, 131–45. Cambridge, MA: Academic Press.

DiCenso, Alba, Liz Bayley, and R. Brian Haynes. 2009. “Accessing Preappraised Evidence: Fine-tuning the 5S Model into a 6S Model.” ACP Journal Club 151, no. 6 (September): JC3-2–JC3-3.

Gulshan, Varun, et al. 2016. “Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.” JAMA 316, no. 22 (December): 2402–10.

Krittanawong, Chayakrit. 2018. “The Rise of Artificial Intelligence and the Uncertain Future for Physicians.” European Journal of Internal Medicine 48 (February): e13–e14.

Long, Erping, et al. 2017. “An Artificial Intelligence Platform for the Multihospital Collaborative Management of Congenital Cataracts.” Nature Biomedical Engineering 1, no. 2: n.p.

Miller, D. Douglas, and Eric W. Brown. 2018. “Artificial Intelligence in Medical Practice: The Question to the Answer?” American Journal of Medicine 131, no. 2: 129–33.


Artificial Intelligence - What Is Algorithmic Error and Bias?

 




Bias in algorithmic systems has emerged as one of the most pressing issues surrounding artificial intelligence ethics.

Algorithmic bias refers to a computer system's recurrent and systemic flaws that discriminate against certain groups or people.

It is crucial to remember that bias is not necessarily a bad thing: it may be built into a system in order to correct an unjust system or reality.

Bias causes problems when it leads to an unjust or discriminating conclusion that affects people's lives and chances.

Individuals and communities that are already vulnerable in society are often most at risk from algorithmic bias and error.

As a result, algorithmic prejudice may exacerbate social inequality by restricting people's access to services and goods.

Algorithms are increasingly being utilized to guide government decision-making, notably in the criminal justice sector for sentencing and bail, as well as in migration management using biometric technology like face and gait recognition.

When a government's algorithms are shown to be biased, individuals may lose faith in the AI system as well as its usage by institutions, whether they be government agencies or private businesses.

There have been several incidents of algorithmic prejudice during the past few years.

A high-profile example is Facebook's targeted advertising, which is based on algorithms that identify which demographic groups a given advertisement should be viewed by.

Indeed, according to one study, job advertisements for janitors and related occupations on Facebook are often targeted at lower-income groups and minorities, while ads for nurses or secretaries are targeted at women (Ali et al. 2019).

This involves successfully profiling persons in protected classifications, such as race, gender, and economic bracket, in order to maximize the effectiveness and profitability of advertising.

Another well-known example is Amazon's algorithm for sorting and evaluating resumes in order to increase efficiency and ostensibly impartiality in the recruiting process.

Amazon's algorithm was trained using data from the company's previous recruiting practices.

However, once the algorithm was implemented, it became evident that it was prejudiced against women, with résumés that contained the terms "women" or "gender" or indicated that the candidate had attended a women's institution receiving worse rankings.

Little could be done to address the algorithm's prejudices since it was trained on Amazon's prior recruiting practices.

While the algorithm was plainly prejudiced, this example demonstrates how such biases may mirror social prejudices, including, in this instance, Amazon's deeply established biases against employing women.

Indeed, bias in an algorithmic system may develop in a variety of ways.

Algorithmic bias occurs when a group of people and their lived experiences are not taken into consideration while the algorithm is being designed.

This can happen at any point during the algorithm development process, from collecting data that isn't representative of all demographic groups to labeling data in ways that reproduce discriminatory profiling to the rollout of an algorithm that ignores the differential impact it may have on a specific group.

In recent years, there has been a proliferation of policy documents addressing the ethical responsibilities of state and non-state bodies using algorithmic processing (to ensure against unfair bias and other negative effects of algorithmic processing), partly in response to significant publicity of algorithmic biases (Jobin et al. 2019).

The European Union's "Ethics Guidelines for Trustworthy AI," issued in 2019, is one of the most important documents in this area.

The EU statement lays forth seven principles for fair and ethical AI and algorithmic processing regulation.

Furthermore, with the adoption of the General Data Protection Regulation (GDPR) in 2018, the European Union has been in the forefront of legislative responses to algorithmic processing.

A corporation may be penalized up to 4% of its annual worldwide turnover if it uses an algorithm that is found to be prejudiced on the basis of race, gender, or another protected category, according to the GDPR, which applies in the first instance to the processing of all personal information inside the EU.

The difficulty of determining where a bias occurred and what dataset caused prejudice is a persisting challenge for algorithmic processing regulation.

This is sometimes referred to as the algorithmic black box problem: an algorithm's deep data processing layers are so intricate and many that a human cannot comprehend them.

One response, based on the GDPR's right to an explanation when subject to an automated decision, has been to identify where the bias occurred via counterfactual explanations: different data is fed into the algorithm to observe where the unequal results emerge (Wachter et al. 2018).
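A counterfactual probe of this kind can be sketched in a few lines; the model, the feature names, and the loan-application scenario below are hypothetical assumptions, used only to illustrate the idea of varying one protected attribute while holding everything else fixed.

```python
# Hedged sketch of a simple counterfactual probe: change one input attribute
# while holding the others fixed and compare the model's outputs.
def counterfactual_probe(model, record, attribute, alternative_value):
    """Return the model's predictions for the original and the altered record.

    `record` is an ordered dict of features in the order the model expects;
    `model` is assumed to expose a scikit-learn-style predict() method.
    """
    original = model.predict([list(record.values())])[0]
    altered_record = dict(record, **{attribute: alternative_value})
    altered = model.predict([list(altered_record.values())])[0]
    return original, altered

# Example usage with a hypothetical trained classifier `loan_model`:
# applicant = {"income": 42000, "years_employed": 6, "gender": 0}
# before, after = counterfactual_probe(loan_model, applicant, "gender", 1)
# A change in the prediction when only the protected attribute changes is
# evidence that the model's decision depends on that attribute.
```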

In addition to legal and legislative instruments for tackling algorithmic bias, technical solutions include building synthetic datasets that seek to repair naturally occurring biases in data or to provide an unbiased and representative dataset.

While such channels for redress are vital, one of the most comprehensive solutions to the issue is to have far more varied human teams developing, producing, using, and monitoring the effect of algorithms.

A mix of life experiences within diverse teams makes it more likely that prejudices will be discovered and corrected sooner.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Biometric Technology; Explainable AI; Gender and AI.

Further Reading

Ali, Muhammed, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. 2019. “Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes.” In Proceedings of the ACM on Human-Computer Interaction, vol. 3, CSCW, Article 199 (November). New York: Association for Computing Machinery.

European Union. 2018. “General Data Protection Regulation (GDPR).” https://gdpr-info.eu/.

European Union. 2019. “Ethics Guidelines for Trustworthy AI.” https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (September): 389–99.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2018. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harvard Journal of Law & Technology 31, no. 2 (Spring): 841–87.

Zuboff, Shoshana. 2018. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.




Artificial Intelligence - How Are Accidents and Risk Assessment Done Using AI?

 



Many computer-based systems' most significant feature is their reliability.

Physical damage, data loss, economic disruption, and human deaths may all result from mechanical and software failures.

Many essential systems are now controlled by robotics, automation, and artificial intelligence.

Nuclear power plants, financial markets, social security payments, traffic lights, and military radar stations are all under their watchful eye.

High-tech systems may be designed purposefully hazardous to people, as with Trojan horses, viruses, and spyware, or they can be dangerous due to human programming or operation errors.

They may become dangerous in the future as a result of purposeful or unintended actions made by the machines themselves, or as a result of unanticipated environmental variables.

The first death of a person working with a robot occurred in 1979.

A one-ton parts-retrieval robot built by Litton Industries hit Ford Motor Company engineer Robert Williams in the head.

Two years later, Japanese engineer Kenji Urada was killed after failing to completely switch off a malfunctioning robot on the production floor at Kawasaki Heavy Industries.

Urada was shoved into a grinding machine by the robot's arm.

Accidents do not always result in deaths.

A 300-pound Knightscope K5 security robot on patrol at a retail business center in Northern California, for example, knocked down a kid and ran over his foot in 2016.

Only a few cuts and swelling were sustained by the youngster.

The Cold War's history is littered with stories of nuclear near-misses caused by faulty computer technology.

In 1979, a computer glitch at the North American Aerospace Defense Command (NORAD) misled the Strategic Air Command into believing that the Soviet Union had fired over 2,000 nuclear missiles towards the US.

An examination revealed that a training scenario had been uploaded to an active defense computer by mistake.

In 1983, a Soviet military early warning system identified a single US intercontinental ballistic missile launching a nuclear assault.

Stanislav Petrov, the missile defense system's operator, correctly discounted the signal as a false alarm.

The reason of this and subsequent false alarms was ultimately discovered to be sunlight hitting high altitude clouds.

Petrov was eventually punished for humiliating his superiors by disclosing faults, despite preventing global thermonuclear Armageddon.

The so-called "2010 Flash Crash" was caused by stock market trading software.

In slightly over a half-hour on May 6, 2010, the S&P 500, Dow Jones, and NASDAQ stock indexes lost—and then mainly regained—a trillion dollars in value.

Navinder Singh Sarao, a U.K. trader, was arrested after a five-year investigation by the US Department of Justice for allegedly manipulating an automated system to issue and then cancel huge numbers of sell orders, allowing his business to acquire equities at temporarily reduced prices.

In 2015, there were two more software-induced market flash crashes, and in 2017, there were flash crashes in the gold futures market and digital cryptocurrency sector.

Tay (short for "Thinking about you"), a Microsoft Corporation artificial intelligence social media chatterbot, went tragically wrong in 2016.

Tay was created by Microsoft engineers to imitate a nineteen-year-old American girl and to learn from Twitter discussions.

Instead, Tay was trained to use harsh and aggressive language by internet trolls, which it then repeated in tweets.

After barely sixteen hours, Microsoft deleted Tay's account.

More AI-related accidents in motor vehicle operating may occur in the future.

In 2016, the first fatal collision involving a self-driving car happened when a Tesla Model S in autopilot mode collided with a semi-trailer crossing the highway.

The motorist may have been viewing a Harry Potter movie on a portable DVD player when the accident happened, according to witnesses.

Tesla's software does not yet allow for completely autonomous driving, hence a human operator is required.

Despite these dangers, one management consulting company claims that autonomous automobiles might avert up to 90% of road accidents.

Artificial intelligence security is rapidly growing as a topic of cybersecurity study.

Militaries all around the globe are working on prototypes of dangerous autonomous weapons systems.

Automatic weapons, such as drones, that now rely on a human operator to make deadly force judgments against targets, might be replaced with automated systems that make life and death decisions.

Robotic decision-makers on the battlefield may one day outperform humans in extracting patterns from the fog of war and reacting quickly and logically to novel or challenging circumstances.

High technology is becoming more and more important in modern civilization, yet it is also becoming more fragile and prone to failure.

An inquisitive squirrel caused the NASDAQ's primary computer to collapse in 1987, bringing one of the world's major stock exchanges to its knees.

In another example, the ozone hole above Antarctica was not discovered for years because exceptionally low levels reported in data-processed satellite images were assumed to be mistakes.

It's likely that the complexity of autonomous systems, as well as society's reliance on them under quickly changing circumstances, will make completely tested AI unachievable.

Artificial intelligence is powered by software that can adapt to and interact with its surroundings and users.

Changes in variables, individual acts, or events may have unanticipated and even disastrous consequences.

One of the dark secrets of sophisticated artificial intelligence is that it is based on mathematical approaches and deep learning algorithms that are so complicated that even its creators are baffled as to how it makes accurate conclusions.

Autonomous cars, for example, depend on exclusively computer-written instructions while they watch people driving in real-world situations.

But how can a self-driving automobile learn to anticipate the unexpected?

Will attempts to adjust AI-generated code to decrease apparent faults, omissions, and impenetrability lessen the likelihood of unintended negative consequences, or will they merely magnify existing problems and produce new ones? Although it is unclear how to mitigate the risks of artificial intelligence, it is likely that society will rely on well-established and presumably trustworthy machine-learning systems to automatically provide rationales for their actions, as well as examine newly developed cognitive computing systems on our behalf.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



Also see: Algorithmic Error and Bias; Autonomy and Complacency; Beneficial AI, Asilomar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability; Explainable AI; Product Liability and AI; Trolley Problem.


Further Reading

De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D. diss., George Mason University.

Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerability.” Futures 22, no. 5 (June): 462–74.

Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46, no. 1 (Spring): 50–80.

Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press.


