Can't connect to a SPARQLRepository using openrdf (Sesame) in the mapper class of a Hadoop/MapReduce job


I wrote a Java application using the Sesame (RDF4J) API to test the availability of more than 700 SPARQL endpoints, but it takes hours to complete, so I'm trying to distribute the work using the Hadoop/MapReduce framework.

The problem is that, in the mapper class, the method that tests availability doesn't work; I think it can't connect to the endpoint.

Here is the code I used:

public class DMap extends Mapper<LongWritable, Text, Text, Text> {

    protected boolean isActive(String sourceURL)
            throws RepositoryException, MalformedQueryException, QueryEvaluationException {
        boolean t = true;
        SPARQLRepository repo = new SPARQLRepository(sourceURL);
        repo.initialize();
        RepositoryConnection con = repo.getConnection();
        TupleQuery tupleQuery = con.prepareTupleQuery(QueryLanguage.SPARQL,
                "SELECT * WHERE { ?s ?p ?o . } LIMIT 1");
        tupleQuery.setMaxExecutionTime(120);
        TupleQueryResult result = tupleQuery.evaluate();
        if (!result.hasNext()) {
            t = false;
        }
        // Close the result before the connection it came from.
        result.close();
        con.close();
        repo.shutDown();
        return t;
    }

    public void map(LongWritable key, Text value, Context context) throws InterruptedException, IOException {
        String src = value.toString();
        String val = "null";
        try {
            val = isActive(src) ? "active" : "inactive";
        } catch (MalformedQueryException | RepositoryException | QueryEvaluationException e) {
            e.printStackTrace();
        }
        context.write(new Text(src), new Text(val));
    }
}

The input uses TextInputFormat and looks like this:
http://visualdataweb.infor.uva.es/sparql
...

The output uses TextOutputFormat, and I'm getting this:
http://visualdataweb.infor.uva.es/sparql null
...
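Since each input line is handed to SPARQLRepository verbatim, a trailing space, blank line, or malformed entry in the input file would make that connection fail silently. A minimal, self-contained sketch of validating a line before use (the class and method names here are hypothetical, and it uses only java.net, no Sesame classes):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class EndpointLineCheck {

    // Returns the trimmed URL if the line looks like an absolute http(s) URL,
    // or null if the line should be skipped (blank, wrong scheme, garbage).
    static String sanitize(String line) {
        String s = line.trim();
        if (s.isEmpty()) {
            return null;
        }
        try {
            URI u = new URI(s);
            String scheme = u.getScheme();
            if (scheme == null) {
                return null;
            }
            scheme = scheme.toLowerCase();
            if (!scheme.equals("http") && !scheme.equals("https")) {
                return null;
            }
            return s;
        } catch (URISyntaxException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // A valid endpoint line with stray whitespace is trimmed and accepted.
        System.out.println(sanitize(" http://visualdataweb.infor.uva.es/sparql "));
        // A non-URL line is rejected.
        System.out.println(sanitize("not a url"));
    }
}
```

Calling a check like this at the top of map() and skipping (or marking) bad lines rules out input formatting as the cause before blaming the network.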

Edit 1: as suggested by @james-leigh and @ChristophE, I used try-with-resources statements, but still no results:

public class DMap extends Mapper<LongWritable, Text, Text, Text> {

    public void map(LongWritable key, Text value, Context context) throws InterruptedException, IOException {
        String src = value.toString();
        String val = "";
        SPARQLRepository repo = new SPARQLRepository(src);
        repo.initialize();
        try (RepositoryConnection con = repo.getConnection()) {
            TupleQuery tupleQuery = con.prepareTupleQuery(QueryLanguage.SPARQL,
                    "SELECT * WHERE { ?s ?p ?o . } LIMIT 1");
            tupleQuery.setMaxExecutionTime(120);
            try (TupleQueryResult result = tupleQuery.evaluate()) {
                val = result.hasNext() ? "active" : "inactive";
            }
        }
        repo.shutDown();
        context.write(new Text(src), new Text(val));
    }
}

Thanks

1 answer

James Leigh:

Use try-with-resources statements. SPARQLRepository uses background threads that must be cleaned up properly.
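To illustrate why this matters, here is a self-contained sketch (using a stand-in AutoCloseable rather than Sesame classes, so the names are illustrative): try-with-resources closes resources in the reverse order of acquisition, even if an exception escapes the block, which is what guarantees the query result is released before the connection it came from.

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {

    // Stand-in for a connection or query result: just records open/close events.
    static class Resource implements AutoCloseable {
        final String name;
        final List<String> log;

        Resource(String name, List<String> log) {
            this.name = name;
            this.log = log;
            log.add("open " + name);
        }

        @Override
        public void close() {
            log.add("close " + name);
        }
    }

    static List<String> run() {
        List<String> log = new ArrayList<>();
        // Outer resource (like RepositoryConnection) is opened first,
        // inner resource (like TupleQueryResult) second.
        try (Resource con = new Resource("connection", log);
             Resource result = new Resource("result", log)) {
            log.add("use");
        }
        // try-with-resources closed them in reverse order: result, then connection.
        return log;
    }

    public static void main(String[] args) {
        for (String event : run()) {
            System.out.println(event);
        }
        // Prints: open connection, open result, use, close result, close connection
    }
}
```

With plain manual close() calls, any exception thrown mid-method skips the cleanup entirely, leaving Sesame's background HTTP threads alive; try-with-resources removes that failure mode.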