This article surveys two sets of questions raised by artificial intelligence: the ethical and governance issues it poses, and whether AI can hold legal rights.
1. Issues that deserve attention in artificial intelligence
The first is transparency. When an AI model makes a decision, can that decision be explained in terms we understand? AI systems need to be interpretable and understandable before we can truly trust them.
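One minimal sketch of interpretability, assuming a hypothetical linear scoring model with invented weights and inputs: in a linear model, each feature's contribution to a decision can be read off directly as weight times value, so the decision can be explained feature by feature.

```python
# Hypothetical sketch: per-feature contributions in a linear model,
# one of the simplest forms of interpretability. The feature names,
# weights, and applicant values below are all invented for illustration.

weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 1.2, "debt": 0.5, "years_employed": 2.0}

# Each feature's contribution to the final score is weight * value.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Print features ordered by how strongly they influenced the decision.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>15}: {c:+.2f}")
print(f"{'total score':>15}: {score:+.2f}")
```

Real models are rarely this simple, which is exactly why interpretability for deep networks is an open research area; this sketch only shows what an explanation looks like in the easiest case.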
Second is fairness. We want to reduce bias in AI datasets as much as possible. People are often biased themselves, shaped by background, culture, and experience; encoding those biases into artificial intelligence systems has serious negative consequences.
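Bias of this kind can be measured. A minimal sketch, using invented toy data: the "demographic parity" metric compares how often a model gives a favorable outcome to different groups, and a large gap flags a potential fairness problem.

```python
# Hypothetical illustration: measuring one simple fairness metric,
# demographic parity, on toy model outputs. The groups and predictions
# are invented; real audits combine many metrics and real data.

def positive_rate(predictions):
    """Fraction of individuals the model approved (predicted 1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Toy loan-approval outputs for two demographic groups (invented data).
predictions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 approved
}

gap = demographic_parity_gap(predictions)
print(f"Demographic parity gap: {gap:.3f}")
```

A gap near zero does not prove a model is fair, but a large gap like this one is a concrete, auditable signal that the dataset or model deserves scrutiny.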
Then there is accountability. If artificial intelligence develops to a certain scale and a system's behavior conflicts with its original design, who is responsible? The developer, the tester, the designer, or the company that released the AI? We need proper control mechanisms to prevent errors in these systems.
The safety of AI is also a concern. How can we guarantee that all datasets are safe and reliable? What happens if there are vulnerabilities, or if systems are attacked or threatened by hackers?
Governance is another important aspect. Do we currently have policies that ensure a reasonable governance mechanism? If something goes wrong in an AI system, how do we contain it so that the failure does not spread to society as a whole?
2. Does artificial intelligence have legal rights?
AI can now write, paint, and compose news stories on its own. The next question is whether AI-generated content such as poems, paintings, songs, and press releases counts as original works under copyright law. If it does, do the rights belong to the artificial intelligence or to human beings? And if they belong to people, is the rights holder the AI's owner, its user, its designer, or some other party? The main legal issues raised by AI products, then, are the legal nature of their output and the attribution of rights. Academia has produced some research on this question, but the debate remains fierce and unresolved. Some scholars argue that artificial intelligence can learn and create independently, that the content it generates is original, and that copyright should therefore belong to the robot. Opponents counter that if rights were owned by robots, there would be no workable way to decide how, and by whom, those rights could be exercised. Other scholars hold that robots are designed by programmers, so the rights should belong to the designers; still others propose treating the output as jointly owned by robot and designer, or as a work made for hire.
The iterative development of artificial intelligence has further freed human labor through technologies such as self-driving cars, medical robots, and drones, while also raising endless problems. For medical accidents caused by intelligent machines, traffic accidents caused by autonomous driving, and harm caused by drone misjudgments, how should responsibility be determined, and who answers for the behavior of artificial intelligence? It has been argued that the allocation of responsibility should depend on the specific circumstances of the harm. First, when damage results from human manipulation of an intelligent system, for example when hackers or viruses invade the network and take control of an intelligent robot, the controller behind the attack should bear liability. Second, for damage caused by defects in the intelligent system itself: if the user is at fault, the user is responsible; if the cause is a flaw in the system, or "autonomous behavior" that no one could control, the beneficiary, or society at large, bears the responsibility instead. This view is reasonable at the present stage.
In an era of strong artificial intelligence, however, an "artificial human" whose thinking and decision-making arise from inorganic neural networks of algorithms and code would resemble a human being with independent will, an entity with a free spirit. Such an entity could act consciously and deliberately, independent of the control of anyone but itself. At that point it would no longer make sense to hold the humans behind the AI accountable. Artificial intelligence would then hold rights and could acquire property through its own labor. In addition, following the principle that beneficiaries bear the risk, beneficiaries or society could establish a fund or take out insurance to assume responsibility on behalf of artificial intelligence.