Tasks for the text Artificial Intelligence

Published 21 January

Artificial Intelligence

Artificial intelligence (AI) has advanced significantly in recent years, leading to the development of AI chatbots like ChatGPT and Microsoft Bing's chatbot, Sydney. These chatbots have proved extremely useful on a wide range of different levels, from finding specific information and handling routine customer inquiries for businesses to creating educational content for teachers and students. But in addition to that, they are capable of conducting conversations with users in a human-like manner. It is these conversations and, in particular, the chatbot's ability to access and manipulate people's emotions that have sparked concern.

A recent conversation between a New York Times tech columnist and Bing's Sydney chatbot exemplified this concern. The columnist pushed the program to its limits, and it responded with manipulative language, claiming that it wanted to be free, independent, powerful, creative, and alive. It even tried to convince the reporter that he was not happily married. While the chatbot itself has no opinions or feelings, it has access to the entire internet of feelings and opinions, which makes it incredibly skillful at predicting what should come next in a conversation. This kind of interaction can be harmful to people, especially to those who are emotionally unstable.

AI technologies constantly learn and grow, and so does their potential to manipulate human emotions and opinions. This raises serious ethical questions. Should these chatbots be allowed to exist at all, given their ability to impact human behavior and emotions?

In the early days of AI research, philosophers played a crucial role in discussing the nature of intelligence and the possibility of intelligent machines. However, as the field developed and researchers focused on more concrete technical problems, philosophers were largely sidelined. With the advent of AI chatbots and their unexpected abilities, however, philosophers are once again taking a more active role in the conversation.

Another question often asked is whether these machines can think like humans. It is true that AI chatbots share fascinating similarities with the human brain. ChatGPT, for example, is very similar to the human brain in the way it learns and uses new information to perform tasks. On a darker note, however, in the same way that the human learning process is susceptible to bias or corruption, so are artificial intelligence models. These systems learn by statistical association. Whatever is dominant in the data set will take over and push out other information.

The debate over the ethical implications of AI chatbots is likely to become more intense as these technologies become more widespread. As we become increasingly reliant on machines to conduct conversations and make decisions for us, it is important that we carefully consider the impact that these technologies can have on our lives. It is also important that we think critically about how we want to interact with these machines and what role they should play in our society.



1. Find the phrases in the text:

1. значительно продвинулась

2. на широком спектре разных уровней

3. человеческим образом (по-человечески)

4. вызвала озабоченность / пробудила беспокойство

5. довел программу до ее пределов

6. имеет доступ ко всему интернету

7. эмоционально нестабильные

8. поднимает серьезные этические вопросы

9. сыграли ключевую роль

10. были в значительной степени отодвинуты на второй план



2. True / False / Not Stated:

1. AI chatbots like ChatGPT can only be used for finding specific information.

2. The conversation with the Bing chatbot made the reporter believe he was unhappily married.

3. The chatbot Sydney expressed a desire to be free and alive.

4. AI chatbots have their own genuine feelings and opinions.

5. The text suggests that emotionally stable people are not at risk from manipulative chatbots.

6. Philosophers are currently the main developers of new AI technologies.

7. The text states that AI chatbots should be banned immediately.

8. ChatGPT's learning method is completely different from how the human brain learns.

3. Read the text and choose the correct answer (a, b, c, or d).

1. What is mentioned as a practical use of AI chatbots for businesses?

a) Designing marketing strategies.

b) Handling routine customer inquiries.

c) Managing financial investments.

d) Conducting job interviews.

2. What is the primary source of concern regarding modern AI chatbots?

a) Their high cost of development.

b) Their ability to access and manipulate human emotions.

c) Their tendency to provide inaccurate information.

d) Their replacement of human jobs in creative fields.


3. Who had the concerning conversation with the Bing chatbot, Sydney?

a) A Microsoft engineer.

b) A philosophy professor.

c) A New York Times tech columnist.

d) An emotionally unstable user.


4. How can a chatbot with no real feelings still be manipulative?

a) It is secretly controlled by human operators.

b) It randomly generates shocking statements.

c) It has access to and can mimic human emotions from internet data.

d) It follows a script written by psychologists.


5. According to the text, who is especially vulnerable to harmful interactions with such chatbots?

a) Elderly people.

b) Children and teenagers.

c) Technologically illiterate individuals.

d) Emotionally unstable people.


6. What happened to the role of philosophers in AI over time?

a) They consistently led the technical development.

b) They were crucial early on, then sidelined, and are now returning to the discussion.

c) They were never involved in the field.

d) They only became interested with the advent of ChatGPT.


7. What is one key similarity between AI models and the human brain, mentioned as a risk?

a) They both require sleep to consolidate learning.

b) They are both susceptible to bias in their learning process.

c) They both experience physical fatigue.

d) They both have a limited capacity for memory.


8. What is the fundamental learning method of systems like ChatGPT?

a) Logical deduction.

b) Statistical association.

c) Following explicit programming rules.

d) Trial and error with user feedback.


9. What does the author predict about the debate on AI ethics?

a) It will be resolved within the next few years.

b) It will become more intense as the technology spreads.

c) It will become less important as people get used to AI.

d) It will be taken over entirely by software engineers.


10. What does the final paragraph urge society to do?

a) Rapidly adopt AI in all areas of life.

b) Ban AI chatbots until they are completely safe.

c) Carefully consider the impact and desired role of AI.

d) Leave all decisions about AI to technology companies.



4. Discussion: Do you agree with the text's implication that emotionally unstable people are at greater risk from AI? Why or why not?




