AI's growth across the tech industry has been hard to miss over the past year. In the second edition of its "State of Ethics and Trust in Technology" report, Deloitte found that 74% of companies have begun testing generative AI and 65% have started using it internally. Growing awareness of AI's capabilities has raised questions about how organizations can use the technology ethically. For the report, Deloitte interviewed 26 specialists across various industries and fielded a 64-question survey to more than 1,700 business and technical professionals.
The survey revealed that many respondents considered cognitive technologies, such as generative AI, to hold the greatest potential for social good but also to pose the most serious ethical risk. Notably, over half of respondents said their company either had no ethical principles guiding its use of generative AI or were unsure whether it did.
The report also highlighted concerns about the unethical use of AI, including data privacy, transparency, data poisoning, and intellectual property and copyright issues. Respondents likewise pointed to the kinds of harm that could follow from ethical violations: reputational damage, harm to people, regulatory penalties, financial loss, and employee dissatisfaction.
To ensure the safe use of AI, Deloitte recommended a multi-step approach for companies: exploration, foundational planning, governance, training and education, pilots, implementation, and audit. The report emphasized transparency throughout AI implementation and the need for companies to have a plan in place for addressing any issues that arise.
The report also examined generative AI's impact on human workers, highlighting concerns about transparency, data privacy, and job displacement. It noted that companies need to work together to identify risks and establish governance in order to generate stakeholder value and build a more equitable world.