Tech Fashion Web
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
by Tech Fashion
June 21, 2022
in Technology
For years, activists and academics have raised concerns that facial analysis software claiming to identify a person’s age, gender and emotional state can be biased, unreliable or invasive, and should not be sold.

Microsoft acknowledged some of those criticisms and said on Tuesday that it planned to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this week and will be phased out for existing users within the year.

The changes are part of Microsoft’s move to tighten controls on its artificial intelligence products. After a two-year review, a team at Microsoft has developed a “Responsible AI Standard,” a 27-page document that outlines the requirements for AI systems to ensure they will not have a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions to the problems they are designed to solve” and “comparable quality of service for identified demographics, including marginalized groups.”

Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or a life opportunity are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible A.I. officer.

There were heightened concerns at Microsoft about the emotion recognition tool, which labeled a person’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.

“There is a tremendous amount of cultural, geographic and individual variation in the way we express ourselves,” Ms. Crampton said. That raised concerns about reliability, along with the bigger question of whether “facial expression is a reliable indicator of your internal emotional state,” she said.

The age and gender analysis tools being eliminated, along with other tools that detect facial attributes such as hair and smiles, could be useful for interpreting visual images for blind or low-vision people, for example. But the company decided it was problematic to make the profiling tools generally available to the public, Ms. Crampton said.

In particular, she added, the system’s so-called gender classification was binary, “and that’s not consistent with our values.”

Microsoft will also place new controls on its facial recognition feature, which can be used to perform identity checks or search for a particular person. Uber, for example, uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool must apply for access and explain how they plan to use it.

Users must also submit an application and explain how they will use other potentially abusive A.I. systems, such as Custom Neural Voice. The service can generate a human voice print based on a sample of someone’s speech, so that authors, for example, can create synthetic versions of their voice to read their audiobooks in languages they don’t speak.

Because the tool could be misused to give the impression that people have said things they haven’t, speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks that Microsoft can detect.

“We are taking concrete steps to live up to our AI principles,” said Ms. Crampton, who spent 11 years as a lawyer at Microsoft and joined the AI ​​ethics group in 2018. “It’s going to be a huge journey.”

Microsoft, like other technology companies, has had stumbles with its artificially intelligent products. In 2016, it released a chatbot on Twitter called Tay that was designed to learn “conversational understanding” from the users it interacted with. The bot quickly began spewing racist and offensive tweets, and Microsoft had to take it down.

In 2020, researchers found that speech-to-text tools developed by Microsoft, Apple, Google, IBM and Amazon worked less well for Black people. Microsoft’s system was the best of the group, but it misidentified 15 percent of words for white people, compared with 27 percent for Black people.

The company had collected diverse speech data to train its A.I. system but hadn’t understood just how varied language could be. So it hired a sociolinguistics expert from the University of Washington to explain the language varieties that Microsoft needed to account for. Those varieties spanned demographics and regional differences, as well as how people speak in formal and informal settings.

“If you think race is a determinant of how someone speaks, that’s actually a little misleading,” Ms. Crampton said. “What we learned in consultation with the expert is that, in fact, a huge range of factors influence language variation.”

Ms. Crampton said the effort to resolve that speech-to-text disparity had helped inform the guidance set out in the company’s new standards.

“This is a critical norm-setting period for A.I.,” she said, pointing to Europe’s proposed regulations that would set rules and limits on the use of artificial intelligence. “We hope our standard can contribute to the clear, necessary discussion that needs to be had about the standards technology companies should be held to.”

A lively debate about the potential harms of A.I. has been underway in the technology community for years, fueled by mistakes and errors with real consequences for people’s lives, such as algorithms that determine whether or not people receive benefits. The Dutch tax authority mistakenly took child care benefits away from needy families when a flawed algorithm penalized people with dual nationality.

Automated facial recognition and analysis software has been particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. The company’s vice president of artificial intelligence cited the “many concerns about the place of facial recognition technology in society.”

Several Black men have been wrongfully arrested after flawed facial recognition matches. And in 2020, at the same time as the Black Lives Matter protests after the police killing of George Floyd in Minneapolis, Amazon and Microsoft issued moratoriums on the use of their facial recognition products by police in the United States, saying clearer laws on its use were needed.

Since then, Washington and Massachusetts have passed legislation requiring, among other things, judicial oversight of police use of facial recognition tools.

Ms. Crampton said Microsoft had considered making its software available to police in states with such laws on the books but had decided, for now, not to do so. She said that could change as the legal landscape evolved.

Arvind Narayanan, a computer science professor at Princeton and a prominent A.I. expert, said companies might be stepping back from technologies that analyze the face because they are “more visceral, as opposed to several other types of AI that may be questionable, but we don’t necessarily feel in our bones.”

Companies may also realize that, at least for now, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users it had for the facial analysis features it is removing. Mr. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people to pick the best ads to show them, because they are a “cash cow.”

© 2021 techfashionweb.com. All rights reserved.