John Mueller has recently updated his Google+ post on how indexing works for Progressive Web App and JavaScript sites.
He provided many suggestions; a few of them are:
GoogleBot
Cloaking to GoogleBot is not recommended. Instead, it is advisable to use "progressive enhancement" and "feature detection" techniques to make content available to all users. It is better to avoid redirecting users to an "unsupported browser" page; a polyfill can be used instead. GoogleBot currently does not support features like the Fetch API, Service Workers, requestAnimationFrame, and Promises.
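As a rough illustration of the feature-detection approach, the sketch below checks whether the Fetch API and Promises are available before relying on them and falls back to XMLHttpRequest otherwise, rather than redirecting unsupported clients away. The loadContent function and the /articles.json endpoint are hypothetical names used only for this example.

```typescript
// Hypothetical sketch: detect missing browser features and fall back,
// rather than redirecting the client to an "unsupported browser" page.
// loadContent and the /articles.json endpoint are illustrative names.

function loadContent(url: string, onLoaded: (html: string) => void): void {
  if (typeof window.fetch === "function" && typeof Promise !== "undefined") {
    // Modern path: Fetch API + Promises.
    fetch(url)
      .then((response) => response.text())
      .then(onLoaded);
  } else {
    // Fallback path for clients (such as older crawlers) without fetch:
    // plain XMLHttpRequest, which is far more widely supported.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", url);
    xhr.onload = () => onLoaded(xhr.responseText);
    xhr.send();
  }
}

loadContent("/articles.json", (html) => {
  const container = document.getElementById("content");
  if (container) {
    container.innerHTML = html;
  }
});
```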
rel=canonical attribute usage
It is always better to use "rel=canonical" when the same content is served from multiple URLs.
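For illustration, a rel=canonical declaration is a single link element in the page head. The minimal sketch below adds one from client-side script for pages that are also reachable under parameterized URLs; serving the tag directly in the HTML is equally valid, and the preferred URL shown is only an assumed example.

```typescript
// Hypothetical sketch: point duplicate URLs (e.g. ones carrying tracking or
// filter parameters) at a single preferred URL via rel=canonical.
// "https://www.example.com/article" is an assumed canonical address.

function setCanonical(preferredUrl: string): void {
  // Reuse an existing canonical link element if the page already has one.
  let link = document.querySelector<HTMLLinkElement>('link[rel="canonical"]');
  if (!link) {
    link = document.createElement("link");
    link.rel = "canonical";
    document.head.appendChild(link);
  }
  link.href = preferredUrl;
}

// All parameterized variants of the page declare the same canonical URL.
setCanonical("https://www.example.com/article");
```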
Usage of "#" in URLs
Indexing of URLs containing "#" is pretty rare for GoogleBot. Therefore, it is advisable to use normal URLs containing a path, filename, and query parameters instead. Usage of the History API can be considered for navigation.
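As an illustration of the History API approach, the sketch below swaps fragment-style navigation for real paths using pushState, so each view gets a normal, indexable URL. The showSection routine and the /products/shoes path are assumptions made for this example.

```typescript
// Hypothetical sketch: use the History API for in-app navigation so that each
// view lives at a normal path (e.g. /products/shoes) rather than behind "#".
// showSection is an assumed rendering function for this example.

function showSection(path: string): void {
  const container = document.getElementById("app");
  if (container) {
    container.textContent = `Rendering view for ${path}`;
  }
}

function navigate(path: string): void {
  // Update the address bar to a real URL without a full page reload.
  history.pushState({ path }, "", path);
  showSection(path);
}

// Re-render the correct view when the user presses back/forward.
window.addEventListener("popstate", (event: PopStateEvent) => {
  const state = event.state as { path?: string } | null;
  showSection(state?.path ?? window.location.pathname);
});

navigate("/products/shoes");
```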
Checking of web pages
GoogleBot doesn’t support "#" or "#!" URLs. Therefore, it is advised to use the Fetch and Render tool in Google Search Console to check how the bot sees the web pages.
Too many embedded resources can cause problems
It is advised to limit the number of resources embedded in the web page, and more precisely the number of JS files and server responses required to render it. The more URLs that are required, the higher the chance of timeouts and of the page being rendered without the embedded resources. Reasonable use of HTTP caching is suggested.
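To illustrate the caching side, the sketch below uses Node's built-in http module to serve a script file with a Cache-Control header, so repeat fetches of the same JS file can be answered from cache rather than re-requested. The file path and the one-day max-age are assumptions chosen for the example, not recommended values.

```typescript
// Hypothetical sketch: serve static JavaScript with a Cache-Control header so
// crawlers and browsers can reuse the response instead of re-fetching it.
// "./public/app.js" and the one-day max-age are illustrative choices.
import * as http from "http";
import * as fs from "fs";

const server = http.createServer((req, res) => {
  if (req.url === "/app.js") {
    fs.readFile("./public/app.js", (err, data) => {
      if (err) {
        res.writeHead(404);
        res.end("Not found");
        return;
      }
      res.writeHead(200, {
        "Content-Type": "application/javascript",
        // Allow the response to be cached for a day.
        "Cache-Control": "public, max-age=86400",
      });
      res.end(data);
    });
    return;
  }
  res.writeHead(404);
  res.end("Not found");
});

server.listen(8080);
```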
In general, important information on a web page should not be hidden behind JavaScript. Google, being the search engine giant, may be able to index the information to a large extent, but other search engines might still have difficulties doing the same.