Big Tech’s Guide to Talking About the Ethics of Artificial Intelligence


Artificial intelligence researchers often say that good machine learning is really more art than science. The same could be said of effective public relations. Choosing the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one's brand image, but done poorly, it can trigger an even greater backlash.

The tech giants know this well. Over the past few years, they have had to learn this art quickly as they have faced growing public distrust of their actions and mounting criticism of their AI research and technologies.

They have now developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly, while making sure not to invite too much scrutiny. Here's an insider's guide to decoding their language and challenging the hidden assumptions and values behind it.

accountability (n) - The act of holding someone else responsible for the consequences when your AI system fails.

accuracy (n) - Technical correctness. The most important measure of success in evaluating an AI model's performance. See validation.

adversary (n) - A lone engineer capable of disrupting your powerful revenue-generating AI system. See robustness, security.

alignment (n) - The challenge of designing AI systems that do what we tell them to and value what we value. Intentionally abstract. Avoid using real examples of harmful unintended consequences. See safety.

artificial general intelligence (ph) - A hypothetical AI god that is probably far off in the future, but may also be imminent. Can be really good or really bad, whichever is more rhetorically useful. Obviously you're building the good one. Which is expensive. Therefore, you need more money. See long-term risks.

audit (n) - A review of your company or AI system that you pay someone else to conduct so that you appear more transparent without having to change anything. See impact assessment.

augment (v) - To increase the productivity of white-collar workers. Side effect: automating away blue-collar jobs. Sad but inevitable.

beneficial (adj) - A blanket descriptor for whatever it is you are trying to build. Conveniently ill-defined. See value.

by design (ph) - As in "fairness by design" or "accountability by design." A phrase to signal that you are thinking hard about important things from the very beginning.

compliance (n) - The act of following the law. Anything that isn't illegal goes.

data labelers (ph) - The people who allegedly exist behind Amazon's Mechanical Turk interface to do data-cleaning work for cheap. Unsure who they are. Have never met them.

democratize (v) - To scale a technology at all costs. A justification for concentrating resources. See scale.

diversity, equity, and inclusion (ph) - The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them.

efficiency (n) - Using less data, memory, staff, or energy to build an AI system.

ethics board (ph) - A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google's AI ethics board (canceled), Facebook's Oversight Board (still standing).

ethics principles (ph) - A set of truisms used to signal your good intentions. Keep them high-level. The vaguer the language, the better. See responsible AI.

explainable (adj) - Describes an AI system that you, the developer, and the user can understand. Much harder to achieve for the people it's used on. Probably not worth the effort. See interpretable.

fairness (n) - A complicated notion of impartiality used to describe unbiased algorithms. Can be defined in dozens of different ways depending on your preference.

for good (ph) - As in "AI for good" or "data for good." An initiative completely tangential to your core business that helps you generate good publicity.

foresight (n) - The ability to look into the future. Basically impossible: thus, a perfectly reasonable explanation for why you can't rid your AI system of unintended consequences.

framework (n) - A set of guidelines for making decisions. A good way to appear thoughtful while delaying actual decision-making.

generalizable (adj) - The sign of a good AI model. One that continues to work under changing conditions. See real-world.

governance (n) - Bureaucracy.

human-centered design (ph) - A process that involves using "personas" to imagine what an average user might want from your AI system. May involve soliciting feedback from actual users. Only if there's time. See stakeholders.

human in the loop (ph) - Any person who is part of an AI system. Responsibilities range from faking the system's capabilities to warding off accusations of automation.

impact assessment (ph) - A do-it-yourself review of your company or AI system to show your willingness to consider its downsides without changing anything. See audit.

interpretable (adj) - Describes an AI system whose computation you, the developer, can follow step by step to understand how it arrived at its answer. Actually probably just linear regression. AI sounds better.

integrity (n) - Issues that undermine the technical performance of your model or your company's ability to scale. Not to be confused with issues that are bad for society. Not to be confused with honesty.

interdisciplinary (adj) - A term used of any team or project involving people who do not code: user researchers, product managers, moral philosophers. Especially moral philosophers.

long-term risks (n) - Bad things that could have catastrophic consequences in the far-off future. Will probably never happen, but are more important to study and avoid than the immediate harms of existing AI systems.

partners (n) - Other elite groups who share your worldview and can work with you to maintain the status quo. See stakeholders.

privacy trade-off (ph) - The noble sacrifice of individual control over personal information for collective benefits like AI-driven health-care advancements, which also happen to be highly profitable.

progress (n) - Scientific and technological advancement. An inherent good.

real-world (ph) - The opposite of the simulated world. A dynamic physical environment filled with unexpected surprises that AI models are trained to survive in. Not to be confused with humans and society.

regulation (n) - What you call for in order to shift the responsibility for mitigating harmful AI onto policymakers. Not to be confused with policies that would hinder your own growth.

responsible AI (n) - A moniker for any work at your company that can be construed by the public as a sincere effort to mitigate the harms of your AI systems.

robustness (n) - The ability of an AI model to function consistently and accurately despite malicious attempts to feed it corrupted data.

safety (n) - The challenge of building AI systems that don't go against their designer's intentions. Not to be confused with building AI systems that don't fail. See alignment.

scale (n) - The de facto end state that any good AI system should strive to achieve.

security (n) - The act of protecting valuable or sensitive data and AI models from being breached by bad actors. See adversary.

stakeholders (n) - Shareholders, regulators, users. The people in power you want to keep happy.

transparency (n) - Revealing your data and code. Bad for proprietary and sensitive information. Thus really hard; quite frankly, even impossible. Not to be confused with clear communication about how your system actually works.

trustworthy (adj) - An assessment of an AI system that can be manufactured with enough coordinated publicity.

universal basic income (ph) - The idea that paying everyone a fixed salary will solve the massive economic upheaval caused when automation leads to widespread job loss. Popularized by 2020 presidential candidate Andrew Yang. See wealth redistribution.

validation (n) - The process of testing an AI model on data other than the data it was trained on, to check that it is still accurate.

value (n) - An intangible benefit rendered to your users that makes you a lot of money.

values (n) - You have them. Remind people.

wealth redistribution (ph) - A useful idea to dangle when people scrutinize you for using obscene amounts of resources and making boatloads of money. How would wealth redistribution work? Universal basic income, of course. Also, not something you could figure out yourself. It would require regulation. See regulation.

withholding publication (ph) - The benevolent act of choosing not to open-source your code because it could fall into the hands of a bad actor. Better to limit access to partners who can afford it.
