This is why content generated via JS (with no fallback option) is a Really Bad Idea.
Step one is to get the plain (X)HTML version right, and have your entire site (or as much of it as makes sense) working nicely that way for search bots and Lynx-like user agents.
Then add a visual layer: CSS/graphics/media for visual polish, but don't significantly change your original (X)HTML markup; allow the original text-only site to stay intact and functioning. Keep your markup clean!
This is called progressive enhancement in web design circles. Do it this way and your site works, in some reasonable form, for everyone.
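As a rough sketch of what that layering can look like (the IDs, URLs, and markup below are made up purely for illustration), the base HTML is complete on its own and a script only adds behaviour on top:

    <!-- Base layer: real, indexable markup that works with no JS at all -->
    <div id="detail">Full product description lives here as plain markup.</div>
    <ul id="product-list">
      <li><a href="/products/widget">Widget</a></li>
      <li><a href="/products/gadget">Gadget</a></li>
    </ul>

    <script>
    // Enhancement layer: intercept clicks and load content in place.
    // If this script never runs (Lynx, bots, old browsers), the plain
    // links above still work and still get indexed.
    document.querySelectorAll('#product-list a').forEach(function (link) {
      link.addEventListener('click', function (e) {
        e.preventDefault();
        fetch(link.href)
          .then(function (res) { return res.text(); })
          .then(function (html) {
            document.getElementById('detail').innerHTML = html;
          });
      });
    });
    </script>

If the script never executes, nothing is lost except the polish; the page is still a complete, working document.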
If a search engine also cannot see the generated HTML, then there is not much to index.
On the other hand, you can sniff a search engine's user agent and serve it something readable. But search engines generally don't like this, and will penalize you pretty severely if they detect differences between what you send them and what you send to a normal browser.
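If you do go that route anyway, the check itself is just string matching on the User-Agent header; a minimal Node-style sketch (the bot names are only examples, and the cloaking caveat above still applies):

    // Very rough server-side sketch: serving bots anything different from
    // what real users get is cloaking and can be penalized.
    var BOT_PATTERN = /googlebot|bingbot|slurp|baiduspider/i;

    function isSearchBot(userAgent) {
      return BOT_PATTERN.test(userAgent || '');
    }

    // e.g. in a request handler:
    // if (isSearchBot(req.headers['user-agent'])) { /* serve the readable version */ }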
A good rule of thumb: If you can see it in Lynx, it can be indexed by Google.
Lynx is an excellent test because it also gives you an idea of how screen readers for the blind will see your page.
There are a few ways to handle this in GWT; this is a great discussion on the subject. It seems like the best option is to serve up static SEO content when the user agent is a bot, as long as the SEO content is identical to what is served via the GWT route. This can be a lot of work, but if you really want a fully rich GWT app that is also optimized for search engines, it may be worth it.
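A very rough sketch of that idea on the server side, written here as a plain Node/Express route rather than GWT's own server code; the file paths and bot list are made up, and the key constraint is that the static page must contain the same content the GWT app renders:

    // Hypothetical Express route: bots get a pre-rendered static page,
    // everyone else gets the normal GWT host page. The static copy must
    // match what the GWT app renders, or it counts as cloaking.
    var express = require('express');
    var path = require('path');
    var app = express();

    var BOT_PATTERN = /googlebot|bingbot|slurp|baiduspider/i;

    app.get('/', function (req, res) {
      if (BOT_PATTERN.test(req.get('User-Agent') || '')) {
        res.sendFile(path.join(__dirname, 'seo', 'index.html'));   // static SEO copy
      } else {
        res.sendFile(path.join(__dirname, 'war', 'MyApp.html'));   // GWT host page
      }
    });

    app.listen(8080);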
Take a look at the Single Page Interface Manifesto for an explanation of how an SPI (AJAX-intensive) application can get indexed by Google and other crawlers. How hard it is depends on the web framework used.
Even if they execute the basic frameworks, I don't think a bot like Googlebot or any other spider will also load the JS files linked from the webpage, and without loading them the JS code will produce errors. /* Correct me if I am wrong */