The default language is English, which is also what is stored in
Elasticsearch. Thus, if the user did not specify a language via headers
or query parameter, there is no reason to call Placeholder.
Some Placeholder responses for language translation can be 30KB, and all
that JSON takes considerable time to parse.
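A minimal sketch of the check, using hypothetical names (`DEFAULT_LANGUAGE`,
`shouldCallPlaceholder`) rather than the actual pelias code:

```javascript
// Hypothetical sketch: Elasticsearch already stores English names, so the
// Placeholder translation call can be skipped unless another language was
// explicitly requested.
const DEFAULT_LANGUAGE = 'en';

function requestedLanguage(req) {
  // language can arrive as a query parameter or an Accept-Language header
  return req.query.lang || req.headers['accept-language'];
}

function shouldCallPlaceholder(req) {
  const lang = requestedLanguage(req);
  return Boolean(lang) && lang !== DEFAULT_LANGUAGE;
}
```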
In some error cases, a warning is repeated many, many times. It turns out
there is code checking against Elasticsearch error codes, and warning
_each_ time it fails to match against one of them.
Often the error being compared is not from Elasticsearch at all, and in
any case the [elasticsearch-exceptions](https://www.npmjs.com/package/elasticsearch-exceptions)
module is 3 years old. We should rewrite most of this code and stop
using that module.
For now, this at least reduces log noise.
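As a rough illustration of the problem (all names here are hypothetical,
not the actual pelias code), the old pattern warned on every failed
comparison, while warning once per unmatched error is enough:

```javascript
// Hypothetical sketch of the logging change; `logger` is any logger with
// a warn() method and `knownErrorCodes` stands in for the list of
// Elasticsearch error names being checked.
const knownErrorCodes = ['index_not_found_exception', 'parsing_exception'];

// before: a warning fired for every code that did not match, so a single
// unrecognized error produced one log line per entry in the list
function classifyNoisy(error, logger) {
  for (const code of knownErrorCodes) {
    if (error.type === code) { return code; }
    logger.warn(`error did not match ${code}`);
  }
  return 'unknown';
}

// after: at most one warning per error, keeping the logs readable
function classifyQuiet(error, logger) {
  const match = knownErrorCodes.find((code) => error.type === code);
  if (!match) {
    logger.warn('unrecognized error type', error.type);
  }
  return match || 'unknown';
}
```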
Since we use the `PORT` env var to configure the port, this value will
often be out of date. This leads to some confusing output in, for example,
`docker-compose ps` when using a port other than the default:
```
julian@manhattan ~/repos/pelias/dockerfiles $ docker-compose ps
   Name        Command      State                 Ports
---------------------------------------------------------------------------
pelias_api   ./bin/start     Up      3100/tcp, 0.0.0.0:4000->4000/tcp
```
Notice how the Ports column shows 3100, even though nothing is running
on that port in the container, because the `PORT` env var was set to
4000.
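For context, here is roughly where the stale value comes from, assuming the
Dockerfile hard-codes the default port in an `EXPOSE` directive (the exact
Dockerfile contents are not shown here):

```dockerfile
# The EXPOSE value is fixed at image build time...
EXPOSE 3100

# ...but the process binds whatever PORT is set to at run time,
# e.g. PORT=4000 in docker-compose.yml, so `docker-compose ps` lists
# both the stale 3100/tcp and the actually published port 4000.
CMD [ "./bin/start" ]
```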
Queries that specified only non-coarse layers (address or venue) and had
no results returned from Elasticsearch would trigger a request to the
PIP service.
The PIP service does not contain any addresses or venues, so this query
will never return anything and only wastes time.
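A small sketch of the guard, using hypothetical names; the point is simply
to check the requested layers before falling back to the PIP service:

```javascript
// Hypothetical sketch: the PIP service only knows about coarse layers,
// so an address/venue-only query with no Elasticsearch results should
// return empty immediately rather than calling it.
const COARSE_LAYERS = ['neighbourhood', 'locality', 'county', 'region', 'country'];

function shouldFallbackToPIP(requestedLayers, elasticsearchResults) {
  if (elasticsearchResults.length > 0) { return false; }
  return requestedLayers.some((layer) => COARSE_LAYERS.includes(layer));
}
```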
There were a couple of problems with the current Dockerfile:
* It set the userid of the processes running in the container to 9999,
without creating a user with that ID. This leads to confusion and an
annoying message when you run an interactive bash session (the shell PS1
would display something like `I have no name!@1438586f786e:~$`).
* It tried to run `chown` on _all_ code files after running `npm install`.
This takes a really long time.
* It did not copy `package.json` and run `npm install` before copying
other files. This means even a one-line code change causes the image
rebuild process to re-run `npm install`, which takes 30 seconds or so.
Now the image creates and uses a pelias user, sets permissions correctly
from the start to avoid `chown`, and only runs `npm install` when it
absolutely has to.
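A sketch of the resulting Dockerfile structure; the base image, paths, and
start command here are assumptions, but it shows the three fixes: a real
`pelias` user, correct ownership from the start, and copying `package.json`
before the rest of the source:

```dockerfile
FROM node:8

# create a dedicated user, so interactive shells no longer show
# "I have no name!" for an orphan numeric UID
RUN useradd -ms /bin/bash pelias

# set up the app directory with the right owner up front, so no slow
# recursive chown is needed later
RUN mkdir /code && chown pelias:pelias /code
USER pelias
WORKDIR /code

# copy package.json on its own first, so `npm install` only re-runs when
# dependencies change, not on every one-line code edit
COPY --chown=pelias:pelias package.json ./
RUN npm install

# copy the rest of the source, already owned by pelias
COPY --chown=pelias:pelias . .

CMD [ "./bin/start" ]
```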