StormCrawler started crawling, but I can't find where the data is stored. I need to save the crawled data to a database so I can connect it to a remote server and have it indexed. StormCrawler seems to focus mainly on Solr and Elasticsearch integration. I just want to store its data in a database so I can use it with any site search solution, such as Typesense, FlexSearch, or other site search software.
I'm very new to web crawling, so I have only just installed the required software to run it.
SC does not focus on Solr or ES; there is, for instance, a SQL module. As for where the crawled results go, that depends on what your topology does. Maybe try to explain which bolts are running, etc.
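If you just want the crawled text in a relational table that any site-search tool can index afterwards, one option is to add your own indexing bolt to the topology. Below is a minimal sketch, assuming Apache Storm 2.x, a MySQL database reachable via JDBC, and that the parser bolt upstream emits tuples with `url` and `text` fields; the package name, table name, connection string, and credentials are placeholders, so check them against your own setup before using anything like this.

```java
package com.example.bolts; // hypothetical package name

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

/** Writes each parsed page into a SQL table so an external search engine can index it. */
public class JdbcIndexingBolt extends BaseRichBolt {

    private OutputCollector collector;
    private Connection connection;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
        try {
            // Placeholder connection string -- point this at your own database.
            connection = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/crawl", "crawler", "secret");
        } catch (Exception e) {
            throw new RuntimeException("Could not open DB connection", e);
        }
    }

    @Override
    public void execute(Tuple tuple) {
        String url = tuple.getStringByField("url");   // assumed field name
        String text = tuple.getStringByField("text"); // assumed field name
        try (PreparedStatement ps = connection.prepareStatement(
                "REPLACE INTO pages (url, text) VALUES (?, ?)")) {
            ps.setString(1, url);
            ps.setString(2, text);
            ps.executeUpdate();
            collector.ack(tuple);
        } catch (Exception e) {
            collector.fail(tuple); // let the topology replay the tuple
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt: nothing is emitted downstream.
    }
}
```

You would then declare this bolt in your topology definition (e.g. the Flux YAML file generated by the archetype) and subscribe it to the stream coming out of the parser bolt; Typesense or FlexSearch can then index the `pages` table on whatever schedule suits you.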