
Why it is extremely difficult to fix accountability in Artificial Intelligence systems



AI systems are extremely powerful and are reshaping our lives. They are proving useful everywhere: as customer-support chatbots for online retailers, in social media feeds that suggest audio and video to users, in banks that use them to sanction loans, and in private and government agencies that use them to hire employees. But at the same time, they pose dangers.

For instance, Amazon abandoned an AI recruiting system after finding that it was unfairly discriminating against potential employees based on gender and social background. Whenever an AI system fails, it is extremely difficult to pinpoint the root cause of the problem. AI systems are black boxes whose inner workings are hard to trace, even for the developers who built them.

It is comparatively easy to attach accountability when a conventional software system fails. So why is it so difficult to assign blame when an AI system fails?

Accountability case for software systems

Suppose a software program is built to calculate electricity charges for a customer of an electricity distribution company. Due to faulty logic in the program, the electricity consumption was reported wrongly. After realising the mistake, the distribution company approached the developer who built the program. The developer investigated the matter immediately, knowing that in a conventional system the problem can lie in only two places: the data or the business logic.

The developer first searched the database for the specific line item and found that the data was correct. So they inspected the source code, and there the problem was. The consumption was saved in the database as a floating-point number with two decimal places; in this case, 100.20 units of energy for the month. But the source code performed the calculation with whole numbers, so it computed the charge for 100 units rather than 100.20. Once found, the problem was fixed immediately: the source code was changed to handle floating-point consumption values.
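To make the failure mode concrete, here is a minimal sketch of the kind of truncation bug described above. The per-unit tariff and the function names are hypothetical, purely for illustration.

```python
# A minimal sketch of the truncation bug described above.
# The per-unit tariff is a made-up number for illustration.

RATE_PER_UNIT = 5.50  # hypothetical tariff

def charge_buggy(consumption: float) -> float:
    # Bug: treating consumption as a whole number truncates
    # 100.20 down to 100, silently dropping 0.20 units.
    return int(consumption) * RATE_PER_UNIT

def charge_fixed(consumption: float) -> float:
    # Fix: keep the floating-point value throughout the calculation.
    return consumption * RATE_PER_UNIT

print(charge_buggy(100.20))  # 550.0   -> wrong bill
print(charge_fixed(100.20))  # ~551.1  -> correct bill
```

Because both the stored value and the logic are inspectable, the fault is easy to localise. (Production billing code would typically use decimal rather than binary floating-point arithmetic, but the point about traceability is the same.)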

Why are AI systems black boxes?

But what about an AI system? There is no equivalent database record to check. An AI system is trained on training data and then evaluated on test data; after that, it computes its answers from the parameters it learned during training. If such a system were asked to calculate energy consumption, it would do so from that trained memory, not from any value saved in a database. So if the system produces a wrong calculation, how do you find where the problem is? Was the issue in the training data, the test data, or the algorithm itself? It is almost impossible to say: a hard nut to crack.
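A minimal sketch makes the contrast visible. In the toy model below (made-up data, illustration only), the predicted bill comes out of a learned weight rather than a stored record, so there is no single row in a database to audit when the answer is wrong.

```python
# A toy model: the bill is predicted from a learned weight,
# not looked up from a stored record. All data is made up.

import numpy as np

X = np.array([[80.0], [120.0], [100.0]])  # past consumption (units)
y = np.array([440.0, 660.0, 550.0])       # past bills

# Fit a one-parameter linear model: bill ~ w * units.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The answer below depends entirely on the learned weight w.
# If it is wrong, the fault could be in the data, the training,
# or the model itself; there is no single record to inspect.
print(100.20 * w[0])  # ~551.1 on this toy data
```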

How to establish transparency in AI systems?

A conventional software system is transparent: its data and its logic are both visible, so accountability is easy to establish. Not so for an AI system. The question, then, is how to establish transparency in an AI system so that accountability can follow.

The first step towards transparency is to examine the data used to train and test the AI model; if the data is unclean or biased, the model will produce wrong results. The next thing to check is the algorithm used to build the model. Algorithms can be hard to follow even for developers, so good documentation, written in plain language, must be maintained so that non-technical people can also understand how the model performs its calculations and makes its decisions. A simple data audit like the sketch below is one place to start.
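Here is a minimal sketch of that first step: checking training data for an obvious group imbalance before the model ever sees it. The column names and values are hypothetical.

```python
# Audit a (made-up) hiring dataset for group imbalance before training.

import pandas as pd

df = pd.DataFrame({
    "gender":   ["M", "M", "M", "F", "M", "F", "M", "M"],
    "approved": [1,    1,   0,   0,   1,   0,   1,   1],
})

# Approval rate per group: a large gap is a red flag worth
# documenting, because the model will learn whatever skew is here.
print(df.groupby("gender")["approved"].mean())
# gender
# F    0.000000
# M    0.833333
```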

Explainability is a must for all AI models. Without proper documentation, it is difficult to trace the root cause of any wrong decision or calculation an AI model makes.
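Explainability tooling can supplement that documentation. As one illustration (not a technique the article prescribes), permutation importance measures how much a model's accuracy degrades when each input column is shuffled, giving a rough, human-readable account of which inputs actually drive the output.

```python
# Permutation importance on a toy model: which input actually matters?
# Data and model are illustrative only.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))   # column 0: consumption, column 1: noise
y = 5.5 * X[:, 0] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Column 0 should dominate; column 1 should be near zero. A report
# like this gives non-developers a trace of what drives the model.
print(result.importances_mean)
```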

Only when an AI model is fully transparent will establishing accountability be possible.





Disclaimer

Views expressed above are the author’s own.











Americans May Have To Pay Much More For Electricity. Reason: Artificial Intelligence



Artificial intelligence is reshaping the future, but not without a cost. A new report by the White House Council of Economic Advisers warns that AI and cloud computing may drive up electricity prices dramatically across the United States unless urgent investments are made in power infrastructure.

The study highlights a significant shift: after decades of minimal electricity demand growth, 2024 alone saw a 2% rise, largely attributed to the surge in AI-powered data centers. The International Energy Agency (IEA) projects that by 2030, data centers in the US could consume more electricity than the combined output of heavy industries such as aluminum, steel, cement, and chemicals.

Productivity Promises vs Power Pressures

Despite the looming challenges, the report does not discount AI’s potential benefits. If half of all US businesses adopt AI by 2034, labor productivity could rise by 1.5 percentage points annually, potentially boosting GDP growth by 0.4% that year. But that promise comes with a price.

To meet the surge in demand, especially when factoring in industrial electrification and efforts to reshore manufacturing, the US would need to invest an estimated $1.4 trillion between 2025 and 2030 in new electricity generation. That figure surpasses the industry's investment over the past decade. The study cautions that without the emergence of lower-cost power providers, such as renewables or advanced nuclear, electricity bills will rise sharply.





Delaware Firm to Evolve Defense Tech Org With Self-Growing AI



Star26 Capital Inc. is collaborating with Delaware-based Synthetic Darwin to supercharge its defense tech developments through self-growing AI.

This partnership will utilize Darwinslab, an AI ecosystem in which digital agents generate, assess, and evolve other algorithms in a process inspired by biological evolution.

The solution slashes the time needed to build or sustain complex AI systems, shrinking development cycles to days and enabling rapid adaptation to new data and mission needs.

Read the full story on our new publication, Military AI: Delaware Firm to Evolve New York Defense Tech Org Through Self-Growing AI





AI isn’t just for coders: 7 emerging non-tech career paths in artificial intelligence




Artificial intelligence is no longer the future. It's already shaping how we live, work, and learn. From smart assistants to personalised learning apps and automated hiring tools, AI is now part of everyday life. But here's something many students still don't realise: you don't have to be a computer science genius to build a meaningful career in AI.

In 2025, AI needs more than just coders. It needs people who understand ethics, design, communication, psychology, policy, and human behaviour. Whether you're studying law, liberal arts, design, economics, or media, there is space for you in this fast-growing field. These emerging roles are all about making AI more responsible, more human, and more useful.

Here are seven exciting non-tech career paths in artificial intelligence that you can start exploring now.

AI ethics specialist

AI systems make decisions that can affect real lives, from who gets hired to who receives a loan. That's why companies and governments need experts who can guide them on what's fair, what's biased, and what crosses a line. Ethics specialists work closely with developers, legal teams, and product leaders to make sure AI is built and used responsibly.

Best suited for: Students from philosophy, sociology, law, or political science backgrounds

Where to work: Tech companies, research institutes, policy think tanks, or digital rights NGOs

AI UX and UI designer

AI tools need to be easy to use, intuitive, and accessible. That's where design comes in. AI UX and UI designers focus on creating smooth, human-centered experiences, whether it's a chatbot, a virtual assistant, or a smart home interface. They use design thinking to make sure AI works well for real users.

Best suited for: Students of psychology, graphic design, human-computer interaction, or visual communication

Where to work: Tech startups, health-tech and ed-tech platforms, voice and interface design labs

AI policy analyst

AI raises big questions about privacy, rights, and regulation. Governments and organisations are racing to create smart policies that balance innovation with safety. AI policy analysts study laws, write guidelines, and advise decision-makers on how to manage the impact of AI in sectors like education, defense, healthcare, and finance.

Best suited for: Public policy, law, international relations, or development studies students

Where to work: Government agencies, global institutions, research bodies, and policy units within companies

AI behavioural researcher

AI tools influence human behaviour, from how long we scroll to what we buy. Behavioural researchers look at how people respond to AI and what changes when technology gets smarter. Their insights help companies design better products and understand the social effects of automation and machine learning.

Best suited for: Students of psychology, behavioural economics, sociology, or education

Where to work: Tech companies, research labs, social impact startups, or mental health platforms

AI content strategist and explainer

AI is complex, and most people don't fully understand it. That's why companies need writers, educators, and content creators who can break it down. Whether it's writing onboarding guides for AI apps or creating videos that explain how algorithms work, content strategists make AI easier to understand for everyday users.

Best suited for: Students of journalism, English, media studies, marketing, or communication

Where to work: Ed-tech and SaaS companies, AI product teams, digital agencies, or NGOs

AI program manager

This role is perfect for big-picture thinkers who love connecting people, processes, and purpose. Responsible AI program managers help companies build AI that meets ethical, legal, and user standards. They coordinate between tech, legal, and design teams and ensure that AI development stays aligned with values and global standards.

Best suited for: Business, liberal arts, management, or public administration students

Where to work: Large tech firms, AI consultancies, corporate ethics teams, or international development agencies

AI research associate (non-technical)

Not all AI research is about coding. Many labs focus on the social, psychological, or economic impact of AI. As a research associate, you could be studying how AI affects jobs, education, privacy, or cultural behaviour. Your work might feed into policy, academic papers, or product design.

Best suited for: Students from linguistics, anthropology, education, economics, or communication studies

Where to work: Universities, research labs, global think tanks, or ethics institutes

The world of AI is expanding rapidly, and it's no longer just about math, code, and machines. It's also about people, systems, ethics, and storytelling. If you're a student with curiosity, critical thinking skills, and a passion for meaningful work, there's a place for you in AI, even if you've never opened a programming textbook.




