Microsoft President Brad Smith is advocating for a licensing regime for artificial intelligence (AI) systems. He argues that AI systems posing potential safety risks should require licenses, and that companies should be held accountable for any breaches of privacy or violations of civil rights caused by their AI models. Smith testified in support of a regulatory framework proposed by Senators Richard Blumenthal and Josh Hawley, which calls for creating a licensing entity for sophisticated or potentially dangerous AI models.

Smith believes regulation of AI is necessary and that the tech industry should play a large role in writing the rules, while also stressing the importance of government pushing the industry to go further. Microsoft, along with other tech companies, has already signed voluntary safety guidelines released by the White House earlier this year, a process Smith sees as a model for how government entities can approach AI regulation.

While Congress is unlikely to pass major AI legislation this year, Smith is optimistic that regulation will come eventually. He notes that other products with safety risks, such as motor vehicles and prescription drugs, have long been regulated, and argues it is time for AI to face similar oversight.