AI and Accountability
- Author: Rukia Nur
Background
With the emergence and growing popularity of AI, our society has been presented with numerous opportunities and challenges. Beyond its potential applications, ethical considerations must be made if we are to ensure the appropriate use of such a technology. Too often, artificial intelligence is viewed as a panacea, with little regard for ethical concerns, influenced by a philosophy of 'move fast and break things'. Nevertheless, when it comes to the use of AI, human accountability must still be demanded, because these judgments and decisions are already having real-world repercussions. If not handled carefully, AI will only amplify the bias and prejudice present in our societies, which is why we must fully appreciate this possibility and work to establish accountable and transparent systems.
Data as mirrors
It's critical to recognize and correct bias in data (in this paper, bias is meant in the social justice sense rather than as a statistical concept). Recognizing the sources of bias in our data can help us spot and eliminate them before they affect our models or conclusions. For example, if researchers fail to account for historical discrimination when collecting data, the resulting model will be biased against certain individuals. It's natural, and even expected, that samples will never capture a population exactly; however, it's important that engineers proactively ensure that models display as little bias as possible. Responsible data managers should be aware of potential biases and strive to counteract them through appropriate techniques (e.g., oversampling, stratified sampling). In the article Responsible Data Management, the authors reflect:
Data is a mirror reflection of the world. When we think about preexisting bias in the data, we interrogate this reflection, which is often distorted. One possible reason is that the mirror (the measurement process) introduces distortions. It faithfully represents some portions of the world, while amplifying or diminishing others. Another possibility is that even a perfect mirror can only reflect a distorted world—a world such as it is, and not as it could or should be.
Data-oriented approaches to understanding reality are limited, as data cannot provide insight into the circumstances or context in which it was created. Just as a mirror cannot explain why its reflection appears distorted, data cannot explain the context in which it was generated or how it should be interpreted. As the article observes, it would be silly to expect data to have knowledge of itself. Even if we were to collect more data to balance the scales, an algorithm would still be incapable of deciding what is fair and just. A dataset offers one particular view of reality without addressing alternatives. We must consider additional variables and points of reference for a more comprehensive understanding.
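One of the countermeasures mentioned above, stratified sampling, can be sketched in a few lines of Python. This is a minimal illustration, not drawn from any particular library; the function and field names are invented for the example:

```python
import random
from collections import defaultdict

def stratified_sample(records, key, n_per_group, seed=0):
    """Draw an equal-sized random sample from each group so that
    under-represented groups are not drowned out by the majority."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec)
    sample = []
    for members in groups.values():
        # Take at most n_per_group rows from each group.
        k = min(n_per_group, len(members))
        sample.extend(rng.sample(members, k))
    return sample

# Toy dataset: 90% of rows come from group "A", only 10% from group "B".
data = [{"group": "A", "x": i} for i in range(90)] + \
       [{"group": "B", "x": i} for i in range(10)]
balanced = stratified_sample(data, key="group", n_per_group=10)
# Each group now contributes exactly 10 rows to the sample.
```

Equalizing group counts like this does not make the data "unbiased", of course; it only corrects one measurable imbalance, which is precisely the limitation the mirror metaphor points at.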
Discriminatory AI Models
Every model is discriminatory (i.e., in the statistical sense of being able to differentiate). When a model optimizes against a specific objective function, it makes every effort to discriminate along all possible dimensions available in a dataset. This is true for complex deep neural networks, basic regression trees, and heuristic models alike. A model can sometimes discriminate excessively and become ineffective in real-world applications, indicating that it has overfit the training data. Consequently, a hold-out test set is used to validate the results and check that the model generalizes to real-world scenarios. Nonetheless, if left unconstrained, a model will pursue the single aim for which it was designed, discriminating on all available traits to maximize the objective function.
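The hold-out procedure described above can be sketched as follows. This is a toy illustration using only the standard library; in practice one would typically reach for a utility such as scikit-learn's `train_test_split`:

```python
import random

def train_test_split(rows, test_frac=0.2, seed=42):
    """Hold out a fraction of the data so that a model's fit can be
    checked on examples it never saw during training."""
    rng = random.Random(seed)
    shuffled = rows[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

rows = list(range(100))
train, test = train_test_split(rows)
# 80 rows are used for fitting; 20 unseen rows are reserved for evaluation.
```

If the model scores well on `train` but poorly on `test`, that gap is the signal of overfitting: the model has discriminated on patterns specific to the training sample rather than on patterns that generalize.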
Because algorithms are intrinsically discriminatory, we as thoughtful humans must decide what can and should be discriminated on, and what must not. We know humans are capable of discrimination. However, when it comes to making such discrimination an explicit component of how 'things just are,' there is justified apprehension: we cannot and must not build tomorrow on the frameworks of yesterday. We handicap innovation when we fail to account for the biases present in our data.
Ethics and Fairness
It's imperative to consider the potential implications of AI for our societal structures, as well as its influence on our decaying environment. When evaluating frameworks for judging AI ethics, it's important to keep in mind that while AI undeniably has the potential to change countless aspects of our lives, it also has the potential to cause immeasurable harm if it is not wielded responsibly.
The first step in developing an ethical framework is identifying the ethical principles to be applied. These frameworks will likely be built and applied on a case-by-case basis and should reflect the values of the society in which the AI is being used, while ensuring that models are designed to protect individuals, the environment, and human rights first and foremost. Privacy, autonomy, and fairness, for example, should all be considered when developing such a framework.
Once we outline the ethical standards for a given circumstance, the next step is to develop guidelines for how these values should be applied in practice. These standards should be tailored to the specific environment in which the AI model is being used, to ensure that it is employed responsibly and ethically. They should ensure that people are protected and that models are not used to discriminate against specific groups of people.
Finally, it's important that we develop a methodology for monitoring and assessing the use of AI. This system should include procedures for reporting violations of an established and broadly agreed-upon ethical framework, and should be designed to ensure that ethical principles and guidelines are followed and upheld. It can also include a procedure for handling moral issues that arise in relation to the model, such as offering recompense (as defined by the established standards) for any harm caused by the AI.
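As a purely illustrative sketch of what such a reporting and monitoring procedure could look like in code (every class and field name here is hypothetical, not a reference to any real system):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EthicsViolationReport:
    """One entry in a hypothetical audit trail for an AI system."""
    model_id: str
    principle: str      # which agreed-upon principle was violated, e.g. "fairness"
    description: str
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    resolved: bool = False

class AuditLog:
    """Collects reports so that open violations can be tracked and reviewed."""
    def __init__(self):
        self._reports = []

    def report(self, r: EthicsViolationReport):
        self._reports.append(r)

    def open_reports(self):
        return [r for r in self._reports if not r.resolved]

log = AuditLog()
log.report(EthicsViolationReport(
    model_id="credit-model-v2",
    principle="fairness",
    description="Higher denial rate observed for one demographic group"))
```

The structure matters more than the code: violations are recorded against named principles, timestamped, and tracked until resolved, which is what makes accountability auditable rather than aspirational.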
Conclusion
The moral ramifications of automated decision-making are examined in Isaac Asimov's collection I, Robot. Using this as a starting point for conversation, we can stress that humans should remain in charge of deciding what is right and wrong. It is absolutely indispensable that we push ourselves to ask how we want to define and understand fairness and justice in each particular case, and how our definitions and frameworks will affect people and society at large, rather than relying on statistical standards to define fairness in place of our own judgment, thereby absolving ourselves of any accountability whatsoever.