I'm trying to set up Druid with Hadoop 2.6.0 CDH 5.5.2. After wading through and working around various dependency issues, I'm hitting a wall. We force the Hadoop java processes to load the native snappy libraries by setting LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native. I had gotten an error about not being able to load snappy in the middleManager earlier this week, but got around it by setting LD_LIBRARY_PATH in the middleManager's environment. Now I'm to the point where an indexing task successfully completes a MapReduce job and writes out snappy files in /tmp/druid-indexing. After it does, it looks like a Peon process attempts to read what was written. While doing so, I get a snappy loading error:

T20:35:27,591 INFO io.: Job completed, loading up partitions for intervals)].
T20:35:27,643 ERROR io.: Exception while running task
at .Throwables.propagate(Throwables.java:160) ~
at io.HadoopTask.invokeForeignLoader(HadoopTask.java:160) ~
at io.n(HadoopIndexTask.java:175) ~
at io.$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:338)
Caused by: : native snappy library not available: this version of libhadoop was built without snappy support.
at .compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65) ~
at .compress.SnappyCodec.getDecompressorType(SnappyCodec.java:193) ~
at .(CodecPool.
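One way to verify what the error message claims — that the libhadoop being loaded was built without snappy support — is Hadoop's own `hadoop checknative -a` diagnostic, which prints whether each native codec can be loaded. The sketch below assumes the native library directory mentioned in the post (/usr/lib/hadoop/lib/native); adjust for your install, and note it only diagnoses the environment of the shell it runs in, not the Peon's:

```shell
# Prepend the native Hadoop library directory (path taken from the post;
# assumption — adjust to your CDH layout) without clobbering an existing value.
native_dir=/usr/lib/hadoop/lib/native
export LD_LIBRARY_PATH="${native_dir}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"

# With the loader path set, ask Hadoop which native codecs it can actually
# load; a working setup reports "snappy: true" alongside the libsnappy path.
# (Commented out here since it requires a Hadoop install on PATH.)
# hadoop checknative -a
```

Since the Peon is a separate JVM forked by the middleManager, the same LD_LIBRARY_PATH has to reach the Peon's environment too, not just the shell that starts the middleManager.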