We start with a literal listener: a Bayesian agent who updates her prior beliefs (given by the `worldPrior` function) into posterior beliefs by assuming the utterance is true in the actual world.

``````
var literalListener = function(utterance) {
  Infer({
    model() {
      var world = worldPrior()
      var m = meaning(utterance, world)
      factor(m ? 0 : -Infinity)
      return world
    }
  })
}
``````

To flesh out this model, we need the `worldPrior`, the `utterancePrior`, and the `meaning` function, which evaluates an utterance in a given world. We will start with a very simple scenario: there is a known, fixed set of 3 people, and an unknown number of them (between 0 and 3) are nice. The three equally probable utterances are ‘none/some/all of the people are nice’, and these utterances get their standard (highly intuitive) meanings.

``````
///fold:
var literalListener = function(utterance) {
  Infer({
    model() {
      var world = worldPrior()
      var m = meaning(utterance, world)
      factor(m ? 0 : -Infinity)
      return world
    }
  })
}
///

var worldPrior = function() {
  var num_nice_people = randomInteger(4) // 3 people, so 0-3 of them can be nice
  return num_nice_people
}

var utterancePrior = function() {
  var utterances = ["some of the people are nice",
                    "all of the people are nice",
                    "none of the people are nice"]
  var i = randomInteger(utterances.length)
  return utterances[i]
}

var meaning = function(utt, world) {
  return utt == "some of the people are nice" ? world > 0 :
         utt == "all of the people are nice" ? world == 3 :
         utt == "none of the people are nice" ? world == 0 :
         true
}

viz(literalListener("some of the people are nice"))
``````

If you evaluate the above code box, you will see that the inferred meaning of “some of the people are nice” is uniform over all world states in which at least one person is nice, including the state in which all people are nice. This fails to capture the usual ‘some but not all’ scalar implicature.

We can move to a more Gricean listener who assumes that the speaker has chosen an utterance to convey the intended state of the world:

``````
///fold:
var literalListener = function(utterance) {
  Infer({
    model() {
      var world = worldPrior()
      var m = meaning(utterance, world)
      factor(m ? 0 : -Infinity)
      return world
    }
  })
}

var worldPrior = function() {
  var num_nice_people = randomInteger(4) // 3 people, so 0-3 of them can be nice
  return num_nice_people
}

var utterancePrior = function() {
  var utterances = ["some of the people are nice",
                    "all of the people are nice",
                    "none of the people are nice"]
  var i = randomInteger(utterances.length)
  return utterances[i]
}

var meaning = function(utt, world) {
  return utt == "some of the people are nice" ? world > 0 :
         utt == "all of the people are nice" ? world == 3 :
         utt == "none of the people are nice" ? world == 0 :
         true
}
///

var speaker = function(world) {
  Infer({
    model() {
      var utterance = utterancePrior()
      factor(world == sample(literalListener(utterance)) ? 0 : -Infinity)
      return utterance
    }
  })
}

var listener = function(utterance) {
  Infer({
    model() {
      var world = worldPrior()
      factor(utterance == sample(speaker(world)) ? 0 : -Infinity)
      return world
    }
  })
}

viz(listener("some of the people are nice"))
``````

If you evaluate the above code box, you will see that this pragmatic listener does capture the scalar implicature: the all-nice world now receives much lower probability.
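To see the numbers behind the implicature, here is a rough plain-JavaScript enumeration of the same model (an illustrative sketch outside WebPPL; the helper names are ours, and the utterances are abbreviated):

```javascript
// Worlds: the number of nice people, 0-3, with a uniform prior.
const worlds = [0, 1, 2, 3];
const utterances = ["some", "all", "none"];

// Truth conditions matching the WebPPL `meaning` function.
const meaning = (utt, w) =>
  utt === "some" ? w > 0 :
  utt === "all"  ? w === 3 :
                   w === 0;

// Normalize a map of non-negative weights into probabilities.
const normalize = (obj) => {
  const z = Object.values(obj).reduce((a, b) => a + b, 0);
  const out = {};
  for (const k in obj) out[k] = obj[k] / z;
  return out;
};

// Literal listener: uniform over worlds where the utterance is true.
const literalListener = (utt) =>
  normalize(Object.fromEntries(
    worlds.filter(w => meaning(utt, w)).map(w => [w, 1])));

// Speaker: chooses an utterance in proportion to the literal
// listener's probability of recovering the intended world.
const speaker = (w) =>
  normalize(Object.fromEntries(
    utterances.map(u => [u, literalListener(u)[w] || 0])));

// Pragmatic listener: uniform prior times speaker likelihood.
const listener = (utt) =>
  normalize(Object.fromEntries(
    worlds.map(w => [w, speaker(w)[utt] || 0])));

console.log(listener("some"));
// worlds 1 and 2 each get 4/9, the all-nice world 3 only 1/9
```

The key step is `speaker(3)`: since “all” pins down world 3 exactly while “some” spreads mass over three worlds, the speaker strongly prefers “all” when everyone is nice, so hearing “some” makes world 3 unlikely.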

This simple Rational Speech Acts (RSA) model was introduced in Frank and Goodman (2012) and Goodman and Stuhlmueller (2013). It is similar to Iterated Best Response and other game-theoretic models of pragmatics. RSA has since been extended and applied to a host of phenomena.

## Optimizing inference

### Combining factor and sample

The search space in `speaker` and `listener` is needlessly large because the factors impose hard constraints on what the embedded listener/speaker can return: most sampled values are simply rejected. Indeed, `factor(v == sample(d) ? 0 : -Infinity)` for a distribution `d` is equivalent to `factor(d.score(v))`, since `d.score(v)` is the log probability of `v` under `d`.
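The equivalence is easy to check by hand with a small plain-JavaScript enumeration (an illustrative sketch, not WebPPL): under the rejection formulation, the total probability mass that survives the hard constraint is exactly `d`'s probability of `v`, which is the same weight that `exp(d.score(v))` contributes.

```javascript
// A toy discrete distribution d, as a value -> probability map.
const d = { a: 0.5, b: 0.3, c: 0.2 };

// Rejection formulation: sample x ~ d, keep the trace only if x === v.
// Summing over all samples, the surviving mass is d[v].
const rejectionWeight = (v) =>
  Object.entries(d).reduce(
    (acc, [x, p]) => acc + (x === v ? p : 0), 0);

// Score formulation: weight the trace by exp(log d[v]) = d[v],
// with no sampling and no rejection at all.
const scoreWeight = (v) => Math.exp(Math.log(d[v]));

console.log(rejectionWeight("b"), scoreWeight("b")); // both ≈ 0.3
```

The two formulations induce the same posterior, but the score version avoids enumerating (or sampling) the embedded distribution's whole support just to throw most of it away.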

``````
///fold:
var literalListener = function(utterance) {
  Infer({
    model() {
      var world = worldPrior()
      var m = meaning(utterance, world)
      factor(m ? 0 : -Infinity)
      return world
    }
  })
}

var worldPrior = function() {
  var num_nice_people = randomInteger(4) // 3 people, so 0-3 of them can be nice
  return num_nice_people
}

var utterancePrior = function() {
  var utterances = ["some of the people are nice",
                    "all of the people are nice",
                    "none of the people are nice"]
  var i = randomInteger(utterances.length)
  return utterances[i]
}

var meaning = function(utt, world) {
  return utt == "some of the people are nice" ? world > 0 :
         utt == "all of the people are nice" ? world == 3 :
         utt == "none of the people are nice" ? world == 0 :
         true
}
///

var speaker = function(world) {
  Infer({
    model() {
      var utterance = utterancePrior()
      var L = literalListener(utterance)
      factor(L.score(world))
      return utterance
    }
  })
}

var listener = function(utterance) {
  Infer({
    model() {
      var world = worldPrior()
      var S = speaker(world)
      factor(S.score(utterance))
      return world
    }
  })
}

viz(listener("some of the people are nice"))
``````

### Caching

In the model above, `listener` re-runs `speaker` inference for every world, and each `speaker` call re-runs `literalListener` inference for every utterance, recomputing the same distributions over and over. WebPPL's `cache` memoizes a function, so each nested `Infer` is computed only once per distinct argument:

``````
///fold:
var worldPrior = function() {
  var num_nice_people = randomInteger(4) // 3 people, so 0-3 of them can be nice
  return num_nice_people
}

var utterancePrior = function() {
  var utterances = ["some of the people are nice",
                    "all of the people are nice",
                    "none of the people are nice"]
  var i = randomInteger(utterances.length)
  return utterances[i]
}

var meaning = function(utt, world) {
  return utt == "some of the people are nice" ? world > 0 :
         utt == "all of the people are nice" ? world == 3 :
         utt == "none of the people are nice" ? world == 0 :
         true
}
///

var literalListener = cache(function(utterance) {
  Infer({
    model() {
      var world = worldPrior()
      var m = meaning(utterance, world)
      factor(m ? 0 : -Infinity)
      return world
    }
  })
})

var speaker = cache(function(world) {
  Infer({
    model() {
      var utterance = utterancePrior()
      var L = literalListener(utterance)
      factor(L.score(world))
      return utterance
    }
  })
})

var listener = function(utterance) {
  Infer({
    model() {
      var world = worldPrior()
      var S = speaker(world)
      factor(S.score(utterance))
      return world
    }
  })
}

viz(listener("some of the people are nice"))
``````

## With semantic parsing

What if we want more complex worlds, and don’t want to hard-code the meaning of sentences? The section on semantic parsing shows how to implement a literal listener that computes the meaning value of a sentence by building it up compositionally from the meanings of words. We can simply plug that parsing model into the above pragmatic speaker and listener, resulting in a combined model.