Artificial Intelligence, Machine Learning, and Human Beings

In a conversation with HackerNoon CEO David Smooke, he identified artificial intelligence as an area of technology in which he anticipates vast growth. He pointed out, somewhat cheekily, that it seems like AI could be further along in figuring out how to alleviate some of our most basic electronic tasks—coordinating and scheduling meetings, for instance. This got me reflecting on the state of artificial intelligence. And mostly why my targeted ads suck so much…

Hmm… so how close can a machine come to “meaning-making?”

Had to look into the term. Found “The Meaning Making Model: A framework for understanding meaning, spirituality, and stress-related growth in health psychology.”

The Meaning Making Model
The Meaning Making Model identifies two levels of meaning, global and situational (Park & Folkman, 1997). Global meaning refers to individuals’ general orienting systems and view of many situations, while situational meaning refers to meaning regarding a specific instance. Situational meaning comprises initial appraisals of the situation, the revision of global and appraised meanings, and the outcomes of these processes. Components of the Meaning Making Model are illustrated in Figure 1. The Meaning Making Model is discrepancy-based, that is, it proposes that people’s perception of discrepancies between their appraised meaning of a particular situation and their global meaning (i.e., what they believe and desire) (Park, 2010a) creates distress, which in turn gives rise to efforts to reduce the discrepancy and resultant distress.

It may not be the perfect definition, but it’s interesting to consider whether it’s more difficult for machines to achieve global or situational meaning. When a machine makes a decision in a single instance, and not as a learning from experiencing an aggregate of situations, it may be able to imitate:

“A person who is told “Seattle is not adjacent to Los Angeles on a map” may wonder, what if it was? They might fold their map in such a bizarre way as to bring Seattle and Los Angeles right next to each other, then settle into a sly, proud smile.”

Then again, that folding of the map reminds me of A Wrinkle in Time (which may be based on String Theory). I think any decent sci-fi-loving AI would have read the book and read up on the theory. Maybe the first truly original AI personalities will have an internet-only outlook on how humans behave?

Interesting questions! (I wish I had a good answer…) I do want to emphasize that I think computers can imitate having “meaning.” From the outside looking in, it may seem like computers have meaning (passing the Turing test, folding a map, etc.). It’s just that current AI is brittle, in the sense that we’ll have to throw a lot of engineering power behind it in order to make it look like a machine is meaning-making. If we ask a machine to slightly change the task at hand, we have to expect to rewrite algorithms and/or to retrain machines. And this is good for data scientists’ job security, but bad news if we’re trying to actually create a convincing general-purpose AI (thinking of robots like Sonny from I, Robot).

A lack of “context” for words doesn’t seem to be quite the issue. As far as I can tell, connectionist natural language understanding work is only about context: the context in which words appear in text. It uses data like the proximity, sequence, and co-occurrence of different words, which limits the semantic relations between words to things like “occurs_with”, “similar_to”, and “follows”. It can still get pretty sophisticated, in that the same relations can hold between whole sequences of words (again, another form of context) as well as between individual words. The texts that can be generated this way can be rather astounding (e.g., the big stir about GPT-2).
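For concreteness, here is a rough sketch of what that purely distributional notion of context amounts to: count which words appear near which, and treat the resulting vectors as “meaning.” The mini-corpus, window size, and names are made up for the example, not taken from any real system.

```python
# Toy distributional "context": co-occurrence counts within a small window,
# plus cosine similarity over the resulting sparse vectors.
from collections import defaultdict
from math import sqrt

corpus = [
    "seattle is a city on the west coast",
    "los angeles is a city on the west coast",
    "the map shows seattle and los angeles",
]

window = 2  # words within +/- 2 positions count as shared context
cooc = defaultdict(lambda: defaultdict(int))

for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                cooc[word][tokens[j]] += 1  # "occurs_with" is just a count

def cosine(u, v):
    """Cosine similarity between two sparse context vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# "similar_to" falls out of shared contexts, nothing deeper than that
print(cosine(cooc["seattle"], cooc["angeles"]))
```

The only relations a scheme like this can ever deliver are things like occurs_with, similar_to, and follows; nothing in it knows that Seattle and Los Angeles are cities, let alone non-adjacent ones.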
I think that the limitations you describe are in part due to the lack of more sophisticated semantic relations, such as opposite_of, negates, includes, subset_of, is_example_of, broader, and narrower. The problem here is that connectionist work rarely tries to incorporate other approaches from the symbolic/cognitive/ontological/meaning end of the mind-sciences spectrum. These, for example, explicitly model many kinds of semantic relationships, try to deal with stories and scenarios, and so forth.
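To make that side concrete as well: these approaches assert relations explicitly and query them directly. A minimal sketch, with the triples invented purely for illustration (a real ontology such as WordNet has far more structure):

```python
# Explicit semantic relations stored as (subject, relation, object) triples.
triples = {
    ("tall", "opposite_of", "short"),
    ("seattle", "is_example_of", "city"),
    ("city", "subset_of", "settlement"),
    ("near", "opposite_of", "far"),
}

def related(subject, relation):
    """Return every object linked to `subject` by `relation`."""
    return {obj for (subj, rel, obj) in triples if subj == subject and rel == relation}

print(related("tall", "opposite_of"))       # {'short'}
print(related("seattle", "is_example_of"))  # {'city'}
```

Relations like negates, broader, and narrower would just be more triples and more query patterns; the point is that they are stated and looked up rather than inferred from co-occurrence statistics.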
Connectionist AI is also limited by not incorporating insights from psychology and philosophy. The latter are sometimes based on straw-man arguments, like Searle’s Chinese Room or Jackson’s Mary the color scientist. They make us think, but they are not much of a guide to how minds really work, or how to make one work. An example of a more useful idea is that consciousness happens because the mind models reality and necessarily includes a model of itself. This has been proposed by Thomas Metzinger and others.
These interests led me to cook up a science fiction-ish account (theory and story) of how an AI might be made conscious. It presumes a more sophisticated mix of technologies, psychological processes and self theory. I’ve been waiting for push back …

Thanks, Ted! I do think it’s possible for consciousness to inhabit a robot, so I agree with you there. But here is the part I disagree with: “the limitations you describe are in part the lack of more sophisticated semantic relations, such as opposite_of, negates, includes, subset_of, is_example_of, broader, and narrower.” The issue is that this type of semantic relation is reductionist: it assumes that X and Y are totally independent and must then be joined together (“X = opposite_of Y”). What I am arguing is that, for humans, X is not totally independent, but rather is partially made up of Y. The meaning of “tall” is relative to the meaning of “short”; it’s not simply that there is an abstract property “tall” that we connect with another abstract property “short.” Now, adding these sophisticated semantic relations certainly will improve the “believability” and flexibility of the AI. But that doesn’t necessarily mean it is really conscious. You’re right that Searle and Mary don’t tell us how to make consciousness, but the thought experiments are at least a litmus test for thinking about the presence of consciousness.

I think I see your point, that meanings are somehow entangled, not in some tidy set of relations. I’m at a loss to guess what this means for either AI or for understanding how our own cognition is implemented. Can you suggest something to look at, re the meaning issue?

Sorry for my slow reply! I wish I had a really good answer for you. Unfortunately, I think the answer would entail something of a solution to the symbol-grounding problem, something philosophers have been stuck on for some time. In the meantime, we can continue to create AI using traditional methods and by layering in more of those semantic relations you mentioned above. Plus, and I think this is the crucial piece, we can recognize that AI is a helpful tool but shouldn’t be making the final decisions on anything with ethical weight. At least not yet. We want AI to be partnered with things that do make meaning (i.e., people).