People are more likely to trust an organisation that combines ethics with Data Management and AI. Ethical behaviour improves the way people interact with one another, including in business, and benefits the whole community. Using AI to build good relationships with customers can benefit both the company and the customers.
Businesses can be broken down into two main groups: those that need a steady flow of new customers and those that want to build a customer base that comes back again and again. For an internet business to be truly successful over the long term, it must follow basic ethical rules when collecting and using data. Met expectations of trust are what keep an organisation going in the long run. For businesses that deal with data, the ethical use of that data has become an essential part of every trust model.
People who can see reality from a big-picture point of view are usually moral. There are also people who seem to be ethical by nature, as if it were part of their DNA. However, about 30 percent of people seem to be comfortable with some level of unethical behaviour (essentially, theft and deceit). Whether through upbringing or genetics, most people do feel empathy. Unethical behaviour can hurt people and the community as a whole, even while it rewards a person or a small group. The short-term answer:
People who work with data and who have morals
The ethics of Data Management can be hard to work out, and taking ethics seriously can hurt short-term profits. As a result, some companies in the past simply didn't talk about it. Empathy and ethical concerns were easy to ignore when dealing with faceless customers on the internet. This lack of ethical behaviour led to the creation of laws (society's way of enforcing ethics). As people become more aware of how their personal data and behaviour patterns can be used, more such laws are being made.
Unethical behaviour by internet businesses operating in Europe led to the creation of the General Data Protection Regulation (GDPR). These laws protect the right of people in the European Union to keep their personal information private, and anyone doing business in the EU needs to take them into account. From a Data Management point of view, these laws must be obeyed, or the business can be fined heavily. After the UK left the EU, it created its own version of the GDPR.
The United States has no federal law or regulation comparable to the GDPR, although California has passed the California Consumer Privacy Act (CCPA). Most internet users in the United States don't have the same legal protections, and ethical considerations are often set aside in the name of profit.
When there is trust, things work out better.
Trust is not a permanent, fixed thing. Once given, it can be lost in a second, and only time and positive experiences can win it back. In most industrialised countries, businesses expect people to be honest when doing business with them; trust, in other words, is based on having expectations met. Honesty is the most important part of ethical behaviour, and it is also important to communicate reliable information, which is both useful and essential for keeping data safe.
A stark example of deliberately manipulating data to support a hidden goal is a group of bank employees who decided to build racial bias into the bank's loan-screening process. Because the law forbids bank staff from using race in loan decisions, they devised a way to screen people out based on where they live instead. From a big-picture point of view, this kind of behaviour breaks the law, corrupts the integrity of the data, reduces profits, and harms the community by stifling the growth and improvement of people in certain neighbourhoods. In other words, it is wrong. When this kind of distortion is found, it is a mistake that needs to be fixed right away.
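As a rough, hypothetical sketch of how such a "race-blind" location filter can still encode bias (all postcodes, applicants, and numbers below are invented for illustration):

```python
# Hypothetical data: the filter never sees the "group" column,
# but postcode happens to correlate with group membership.
applicants = [
    # (postcode, group)
    ("10001", "A"), ("10001", "A"), ("10001", "B"),
    ("20002", "B"), ("20002", "B"), ("20002", "A"),
    ("20002", "B"),
]

# The supposedly neutral rule: screen out one postcode.
blocked_postcodes = {"20002"}

def passes_screen(postcode: str) -> bool:
    """True if the application survives the location filter."""
    return postcode not in blocked_postcodes

# Measure the filter's effect per group: group B is rejected far
# more often, even though group membership is never used directly.
for group in ("A", "B"):
    apps = [p for p, g in applicants if g == group]
    passed = sum(passes_screen(p) for p in apps)
    print(f"group {group}: {passed}/{len(apps)} pass the filter")
```

Running this prints a pass rate of 2/3 for group A and 1/4 for group B: the disparate impact shows up in the outcome even though the rule itself never mentions race or group.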
Another example comes from Google, and it shows how algorithms can act in an unethically biased way. In 2016, people were upset about one of the algorithms Google used for search autocompletion: when a query is popular, Google predicts how people are likely to finish it. That year, when part of a question about a minority group was typed into the search box, the algorithm would suggest different endings for the question, and the first suggestion was a stereotyped response that led to a lot of anti-minority websites, spreading prejudice in the process. Google quickly fixed this specific problem, but the example shows how easily algorithms can do things that aren't right if no one is paying attention.
How bad things can get
Generative adversarial networks will make it even more difficult to tell the difference between real and fake news. Though this is still a very new technology, it is dangerous enough to raise serious ethical questions.
In this process, two neural networks are built and then pitted against each other (adversarially, using techniques from game theory). The goal is to build illusions that are close to reality. A generator network turns a vector of numbers into an audio matrix or an image, which then goes into a discriminator network that learns to tell real content from fake. The two networks train together and learn from each other at the same time: as the generator improves its tricks for fooling the discriminator, the discriminator comes up with better and better ways to tell when something isn't real.
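As a minimal sketch of that adversarial loop (assuming PyTorch; the layer sizes, learning rates, and "real" data here are toy placeholders, not a real model):

```python
import torch
import torch.nn as nn

# Toy generator: turns a 16-value noise vector into a fake "sample"
# (a flat 64-value vector standing in for an image or audio frame).
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64))
# Toy discriminator: scores a sample; higher output means "looks real".
D = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 64) + 3.0   # placeholder for real training data
    fake = G(torch.randn(32, 16))      # the generator's attempted forgery

    # Discriminator step: learn to label real as 1 and fake as 0.
    d_loss = (loss_fn(D(real), torch.ones(32, 1))
              + loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator answer "real".
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The `detach()` call keeps the discriminator's update from reaching back into the generator, so each network is trained only on its own objective, which is exactly the back-and-forth described above.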
This process produces things that look and sound like recordings of reality, yet they are not real. Although generative adversarial networks could be used to make art and political humour, they could also be used to make fake news and fake ads. A single dishonest video could do serious damage to the reputation of an organisation or a person.