Microsoft Thailand calls for AI regulations
The Thailand unit of United States-based tech company Microsoft advocated last week for a regulatory environment for artificial intelligence (AI) that would promote the growth of the technology, which is starting to sweep the Kingdom, while also guarding against possible misuse, abuse, and unintended consequences.
“AI adoption is ready to take off in Thailand, and it’s important to note that the government needs to prepare the regulatory environment to ensure innovation development and user protection,” said Ome Sivadith, national technology chief of Microsoft Thailand.
Sivadith called on the government and regulators to engage with stakeholders so they can develop rules for AI use and development in Thailand in an informed manner. He encouraged them to fund research and development into AI, and publish principles and best practices for its use.
Microsoft executives have been in discussions with the Bank of Thailand and the Securities and Exchange Commission on these issues, he said.
The South China Morning Post agreed with Sivadith’s assessment, reporting last year that AI “is moving forward at full steam” in Thailand. It pointed out that IBM is working with the Thai government and corporations to help them employ its Watson AI technology responsibly and efficiently.
Jarit Sidhu, research manager at IDC Asia Pacific, a leading global research firm based in the U.S., said, “Thailand is moving forward at the right pace to integrate emerging technologies like AI into business and daily life.”
Many nations are grappling with similar issues. The Thai government has been at the forefront in the region in adopting digital currencies and other forms of digital development. However, the potential, possible problems, and unintended consequences of AI present an exponentially more perplexing challenge for regulators. The technology is broad in its application, is at an early stage of adoption, and its use raises ethical questions in some contexts.
While AI can simplify life and make businesses more productive, efficient, and profitable, instances of misuse and fraud have already taken place, Sivadith said. He pointed to an example of a video of former U.S. President Barack Obama that he said was faked.
Accountability needs to be considered, he said, in case developers create AI applications that end up causing harm to people through malfunctions or lack of foresight. What would happen, he asked, if a self-driving car caused an accident? Legal frameworks need to be established to govern those kinds of situations.
Regulators must also ensure transparency. In the banking business, Sivadith said, credit-scoring algorithms could be formulated in ways that could discriminate against certain types of customers. Regulators need to develop regimes that will guard against that and expose it when it happens.
Image: https://news.microsoft.com/th-th/2017/06/22/digitalfuture_en/