We’ve collected the best AI quotes from some of the world’s greatest minds: Dan Brown, Rana el Kaliouby, Geoffrey Hinton, Fei-Fei Li, and Elon Musk. Use them as inspiration.
I am often asked what the future holds for Emotion AI, and my answer is simple: it will be ubiquitous, ingrained in the technologies we use every day, running in the background, making our tech interactions more personalized, relevant, authentic, and interactive.
I do worry that organizations and even governments who own AI and data will have a competitive advantage and power, and those who don’t will be left behind.
My own work falls into a subset of AI that is about building artificial emotional intelligence, or Emotion AI for short.
I do think there should be some regulations on AI.
It is deeply against my principles to work on any project that I think is to weaponize AI.
I’m not yet convinced that we will face an unemployment problem created by AI. There will certainly be some occupations eliminated – drivers of vehicles, many production jobs, etc. Whether this creates mass unemployment depends on how quickly this happens. If it happens overnight, it will be a huge disruption.
I believe in the future of AI changing the world. The question is, who is changing AI? It is really important to bring diverse groups of students and future leaders into the development of AI.
AI might be a powerful technology, but things won’t get better simply by adding AI.
When AI approximates Machine Intelligence, then many online and computer-run RPGs will move towards actual RPG activity. Nonetheless, that will not replace the experience of ‘being there,’ any more than seeing a theatrical motion picture can replace the stage play.
The field of AI has traditionally been focused on computational intelligence, not on social or emotional intelligence. Yet being deficient in emotional intelligence (EQ) can be a great disadvantage in society.
We are focusing on four vertical markets – utilities, public sector, large enterprises, and transportation. And, we are building a software business as well that includes analytics, security, IoT platforms, and AI.
A lot of the game of AI today is finding the appropriate business context to fit it in. I love technology. It opens up lots of opportunities. But in the end, technology needs to be contextualized and fit into a business use case.
The real goal of AI is to understand and build devices that can perceive, reason, act, and learn at least as well as we can.
I think that AI will lead to a low cost and better quality life for millions of people. Like electricity, it’s a possibility to build a wonderful society. Also, right now, I don’t see a clear path for AI to surpass human-level intelligence.
Even a cat has things it can do that AI cannot.
Elon Musk is worried about an AI apocalypse, but I am worried about people losing their jobs. Society will have to adapt to a situation where people learn throughout their lives, depending on the skills needed in the marketplace.
We’re making this analogy that AI is the new electricity. Electricity transformed industries: agriculture, transportation, communication, manufacturing.
People are going to use more and more AI. Acceleration is going to be the path forward for computing. These fundamental trends, I completely believe in them.
We want to take AI and CIFAR to wonderful new places, where no person, no student, no program has gone before.
I’m trying to use AI to make the world a better place. To help scientists. To help us communicate more effectively with machines and collaborate with them.
AI is creating tremendous economic value today.
As one of the leaders in the world for AI, I feel tremendous excitement and responsibility to create the most awesome and benevolent technology for society and to educate the most awesome and benevolent technologists – that’s my calling.
To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations.
On the path to ubiquity of AI, there will be many ethics-related decisions that we, as AI leaders, need to make. We have a responsibility to drive those decisions, not only because it is the right thing to do for society but because it is the smart business decision.
We all have a responsibility to make sure everyone – including companies, governments, and researchers – develop AI with diversity in mind.
If we could communicate at the speed of thought, we could augment our creativity with the low-level stuff that AI and robots and 3-D printers and fab labs and all that do.
India has a large base of tech talent, and I hope that a lot of AI machine learning education online will allow Indian software professionals to break into AI.
The development of exponential technologies like new biotech and AI hint at a larger trend – one in which humanity can shift from a world of constraints to one in which we think with a long-term purpose where sustainable food production, housing, and fresh water is available for all.
In healthcare, we are beginning to see that AI can read the radiology images better than most radiologists. In education, we have a lot of data, and companies like Coursera are putting up a lot of content online.
Look, when AIs come up, they’re not going to be like us. A self-aware, sentient AI is not going to be like a human.
We see incredible opportunity to solve some of the biggest social challenges we have by combining high-performance computing and AI – such as climate change and more.
Weaponized AI is probably one of the most sensitized topics of AI – if not the most.
My dream is to achieve AI for the common good.
In the past, much power and responsibility over life and death was concentrated in the hands of doctors. Now, this ethical burden is increasingly shared by the builders of AI software.
When people speak of creating superhumanly intelligent beings, they are usually imagining an AI project.
We really believe that long-term, the way AI will drive is similar to the way humans drive – we don’t break the problem down into objects and vision and localization and planning. But how long it will take us to get there is questionable.
OpenAI is doing important work by releasing tools which promote AI to be developed in the open. Compute power is largely produced by NVIDIA and Intel and still relatively expensive but openly purchasable. Blockchains may be the key final ingredient by providing massive pools of open training data.
AI is going to be extremely beneficial, and already is, to the field of cybersecurity. It’s also going to be beneficial to criminals.
I think of AI itself as a monster of capitalism.
I am super optimistic about the near-term prospects of AI because every time there is a technological disruption, it gives us the opportunity of making the world a little different.
With Emotion AI, we can inject humanity back into our connections, enabling not only our devices to better understand us, but fostering a stronger connection between us as individuals.
AI has been making tremendous progress in machine translation, self-driving cars, etc. Basically, all the progress I see is in specialised intelligence. General intelligence might be hundreds or thousands of years away or, if there is an unexpected breakthrough, decades.
I think that solving the job impact of AI will require significant private and public efforts. And I think that many people actually underestimate the impact of AI on jobs. Having said that, I think that if we work on it and provide the skill training needed, then there will be many new jobs created.
Secrecy is the underlying mistake that makes every innovation go wrong in Michael Crichton novels and films! If AI happens in the open, then errors and flaws may be discovered in time… perhaps by other, wary AIs!
Elon Musk, Stephen Hawking, and others have stated that they think AI is an existential risk. I disagree. I don’t see a risk to humanity of a ‘Terminator’ scenario or anything of the sort.
There are three basic approaches to AI: case-based, rule-based, and connectionist reasoning.