I came across something which showed that you can write analytic functions in Hive.
For example, a word-count function can be written in Hive as well; the difference is that in Aster Data it is already built in, whereas in Hive we have to write it ourselves.
What is the actual difference, and why choose one over the other?
Theoretically, yes: Hive should be able to do all the same, since both feature Java code and a map-reduce framework. I am not a Hadoop/Hive user, but my understanding is that Hive is a layer on top of Hadoop, and everything Hive does (including analytical extensions written in Java) gets translated into Hadoop jobs. You may want to ask a Hive-specific question about what it takes to do this there.
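For instance, word count in Hive has to be written by hand, though Hive's built-in functions make it short. A minimal sketch, assuming a table docs with a single string column line:

    -- Split each line into words, then count occurrences of each word.
    SELECT word, COUNT(*) AS cnt
    FROM (
        SELECT explode(split(line, '\\s+')) AS word
        FROM docs
    ) w
    GROUP BY word
    ORDER BY cnt DESC;

Under the hood, Hive compiles this query into one or more Hadoop jobs rather than executing it inside a database engine.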
By contrast, Aster SQL/MR is native to the Aster database. By native I mean that Java runs within each Aster node as part of the SQL/MR framework, which in turn is an integral part of the Aster database engine. All data manipulations stay consistent with the data model, data distribution keys, etc. While using SQL/MR functions (including Java-based ones), the user never leaves the confines of SQL and the data model. At the same time, SQL/MR functions are polymorphic with respect to table definitions, adapting to arbitrary schemas, all within Aster SQL. You may want to investigate how this would work in Hive.
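To illustrate the invocation model (a sketch only; my_sessionize, the clicks table, and the argument names are all hypothetical), a SQL/MR function is called inline in an ordinary SELECT, with PARTITION BY and ORDER BY in the ON clause controlling how rows are distributed to and ordered within the function:

    -- Hypothetical custom SQL/MR function invoked from plain Aster SQL;
    -- the engine partitions input by user_id and the function's output
    -- schema adapts to the input table definition (polymorphism).
    SELECT *
    FROM my_sessionize(
        ON clicks
        PARTITION BY user_id
        ORDER BY click_time
        TIMEOUT('1800')
    );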
Another point: Aster offers a rich set of high-level analytical functions out of the box, so writing custom Java SQL/MR may not even be necessary. The word-count example, for instance, can be executed with the built-in nGram function plus aggregate SQL.
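A minimal sketch of that approach, assuming a table documents with a text column content; the TEXT_COLUMN and GRAMS argument names follow Aster's documentation for nGram, but verify them against your version:

    -- Emit single words (1-grams) from each document, then count them.
    SELECT ngram AS word, COUNT(*) AS cnt
    FROM nGram(
        ON documents
        TEXT_COLUMN('content')
        GRAMS(1)
    )
    GROUP BY ngram
    ORDER BY cnt DESC;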