SAMPLE CUBE FAILS

sdangi
Team -- I'm very new to the Apache Kylin project. I have built incubator-kylin-1.x-HBase1.x-a608f5b against HBase 1.1 and have also imported the sample cube. However, trying to build the cube fails (see attached) at "#2 Step Name: Extract Fact Table Distinct Columns". Inspecting the log files shows the stack trace below:

2015-10-13 12:33:24,904 WARN [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-maptask.properties,hadoop-metrics2.properties
2015-10-13 12:33:24,981 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-10-13 12:33:24,981 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2015-10-13 12:33:25,005 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2015-10-13 12:33:25,005 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1444752922077_0003, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@306f16f3)
2015-10-13 12:33:25,304 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2015-10-13 12:33:25,703 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /hadoop/yarn/local/usercache/root/appcache/application_1444752922077_0003
2015-10-13 12:33:26,013 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2015-10-13 12:33:26,884 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2015-10-13 12:33:26,914 INFO [main] org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2015-10-13 12:33:26,976 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoClassDefFoundError: org/apache/kylin/common/mr/KylinMapper
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
        at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:745)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassNotFoundException: org.apache.kylin.common.mr.KylinMapper
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 25 more

Any idea where I could possibly have messed up?

Thanks,

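A NoClassDefFoundError for org.apache.kylin.common.mr.KylinMapper usually means the job jar shipped to the MapReduce task simply does not contain that class, e.g. because it came from a different build than the rest of the install. A minimal sketch of the check follows; the stand-in jar is purely illustrative so the commands run anywhere (on a real box, run the same listing against $KYLIN_HOME/lib/kylin-job-*.jar, e.g. with `unzip -l`):

```shell
# Sketch: confirm the mapper class is inside the job jar before blaming
# Hadoop. A jar is just a zip archive. We build a stand-in jar so the
# commands run anywhere; on a real install, point the listing at
# $KYLIN_HOME/lib/kylin-job-*.jar instead (`unzip -l` works the same way).
set -e
work=$(mktemp -d)
mkdir -p "$work/org/apache/kylin/common/mr"
: > "$work/org/apache/kylin/common/mr/KylinMapper.class"   # placeholder entry
(cd "$work" && python3 -m zipfile -c demo-job.jar org)     # stand-in for `zip -r`

# The diagnostic: does the jar contain the class the MR task failed to load?
result=$(python3 -m zipfile -l "$work/demo-job.jar" \
  | grep -c 'org/apache/kylin/common/mr/KylinMapper.class' || true)
if [ "$result" -ge 1 ]; then
  echo "class present"
else
  echo "class MISSING: jar is from a different build"
fi
```

If the class is missing from the deployed jar, no classpath tweak will help; the jar itself has to come from a matching build.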


Re: SAMPLE CUBE FAILS

Shi, Shaofeng
Hi Sdangi, are you running Kylin from the IDE, or did you make a binary package
and run it with "kylin.sh start"?


Re: SAMPLE CUBE FAILS

sdangi
The binary package was built using Maven (see below):

[INFO] --- maven-assembly-plugin:2.5.5:single (make-assembly) @ kylin-monitor ---
[WARNING] Artifact: org.apache.kylin:kylin-monitor:jar:1.1-incubating-SNAPSHOT references the same file as the assembly destination file. Moving it to a temporary location for inclusion.
[INFO] Building jar: /incubator-kylin-1.x-HBase1.x-a608f5b/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
[WARNING] Configuration options: 'appendAssemblyId' is set to false, and 'classifier' is missing. Instead of attaching the assembly file: /incubator-kylin-1.x-HBase1.x-a608f5b/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar, it will become the file for main project artifact. NOTE: If multiple descriptors or descriptor-formats are provided for this project, the value of this file will be non-deterministic!
[WARNING] Replacing pre-existing project main-artifact file: /incubator-kylin-1.x-HBase1.x-a608f5b/monitor/target/archive-tmp/kylin-monitor-1.1-incubating-SNAPSHOT.jar with assembly file: /incubator-kylin-1.x-HBase1.x-a608f5b/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
[INFO]
[INFO] --- maven-jar-plugin:2.4:test-jar (default) @ kylin-monitor ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Kylin:HadoopOLAPEngine ............................. SUCCESS [ 4.959 s]
[INFO] Kylin:AtopCalcite .................................. SUCCESS [ 4.208 s]
[INFO] Kylin:Common ....................................... SUCCESS [ 11.630 s]
[INFO] Kylin:Metadata ..................................... SUCCESS [ 1.887 s]
[INFO] Kylin:Dictionary ................................... SUCCESS [ 1.957 s]
[INFO] Kylin:Cube ......................................... SUCCESS [ 2.842 s]
[INFO] Kylin:InvertedIndex ................................ SUCCESS [ 0.789 s]
[INFO] Kylin:Job .......................................... SUCCESS [ 8.129 s]
[INFO] Kylin:Storage ...................................... SUCCESS [ 3.078 s]
[INFO] Kylin:Query ........................................ SUCCESS [ 1.921 s]
[INFO] Kylin:JDBC ......................................... SUCCESS [ 3.774 s]
[INFO] Kylin:RESTServer ................................... SUCCESS [ 14.141 s]
[INFO] Kylin:Monitor ...................................... SUCCESS [ 1.497 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:01 min
[INFO] Finished at: 2015-10-13T23:06:31-04:00
[INFO] Final Memory: 143M/1848M
[INFO] ------------------------------------------------------------------------
[root@worker1 incubator-kylin-1.x-HBase1.x-a608f5b]#

I then launched Kylin through kylin.sh. Let me know if you see anything wrong in doing that.

Thanks,
Regards,
Reply | Threaded
Open this post in threaded view
|

Re: SAMPLE CUBE FAILS

Yang
In reply to this post by Shi, Shaofeng
What steps led you to the error? We need to know more about your environment
setup to be able to help.


Re: SAMPLE CUBE FAILS

sdangi
1) Downloaded incubator-kylin-1.x-HBase1.x-a608f5b from GitHub, as I have a dependency on the 1.x version of HBase
2) mvn clean install -DskipTests (I had to fix some RAT issues)
3) Installed the kylin-1.0-incubating binary package and replaced its JAR files

Is that the right approach?  

[root@worker1 lib]# ls -ltr ky*.jar
-rw-r--r--. 1 root root 199272 Sep 25 18:03 kylin-storage-1.1-incubating-SNAPSHOT.jar
-rw-r--r--. 1 root root 199705 Sep 25 18:13 kylin-common-1.1-incubating-SNAPSHOT.jar
-rw-r--r--. 1 root root 138574 Sep 25 18:13 kylin-metadata-1.1-incubating-SNAPSHOT.jar
-rw-r--r--. 1 root root  81345 Sep 25 18:13 kylin-dictionary-1.1-incubating-SNAPSHOT.jar
-rw-r--r--. 1 root root  68945 Sep 25 18:13 kylin-invertedindex-1.1-incubating-SNAPSHOT.jar
-rw-r--r--. 1 root root 162440 Sep 25 18:13 kylin-cube-1.1-incubating-SNAPSHOT.jar
-rw-r--r--. 1 root root 319584 Sep 25 18:13 kylin-job-1.1-incubating-SNAPSHOT.jar
-rw-r--r--. 1 root root 111297 Sep 25 18:22 kylin-query-1.1-incubating-SNAPSHOT.jar


cp -r server/src/main/webapp/WEB-INF webapp/kylin/WEB-INF




I am not too sure about step 3.
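Step 3 above (copying freshly built 1.1-SNAPSHOT jars over a 1.0-incubating binary install) is the likely source of the earlier NoClassDefFoundError: classes move between releases, so a lib directory mixing jars from two builds leaves gaps. A hedged sketch of a consistency check follows; the demo files stand in for a real $KYLIN_HOME/lib so it runs anywhere:

```shell
# Sketch: flag mixed-version kylin-* jars in a lib directory. The demo
# files stand in for a real $KYLIN_HOME/lib; the 1.0 jar simulates a
# leftover from the binary package that 1.1-SNAPSHOT jars were copied over.
set -e
lib=$(mktemp -d)
touch "$lib/kylin-job-1.1-incubating-SNAPSHOT.jar" \
      "$lib/kylin-common-1.1-incubating-SNAPSHOT.jar" \
      "$lib/kylin-storage-1.0-incubating.jar"        # leftover from the 1.0 binary

# Strip "kylin-<module>-" and ".jar" to get each jar's version string.
versions=$(ls "$lib"/kylin-*.jar | sed -E 's#.*/kylin-[a-z]+-##; s#\.jar$##' | sort -u)
echo "$versions"
n=$(echo "$versions" | wc -l)
if [ "$n" -eq 1 ]; then
  echo "versions consistent"
else
  echo "MIXED versions: rebuild one complete package instead of swapping jars"
fi
```

On a real install, point the `ls` at $KYLIN_HOME/lib; more than one version string means the install is a hybrid of two builds.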

Re: SAMPLE CUBE FAILS

Shi, Shaofeng
Replacing JAR files is not good; please build a completely new binary package
by executing:

./script/package.sh



Re: SAMPLE CUBE FAILS

Marek Wiewiorka
I'm experiencing the same issue using the HBase 1.x branch.


Re: SAMPLE CUBE FAILS

sdangi
In reply to this post by Shi, Shaofeng
Thanks, that helped. I ran the package script and it has now failed at

#13 Step Name: Create HTable
Duration: 0.01 mins


With result code:2

Params:
-cubename kylin_sales_cube
-input hdfs://worker1.sofiatechnology.com:8020/kylin/kylin_metadata/kylin-a396ce33-b48c-4f9e-a9aa-ce5597c026d5/kylin_sales_cube/rowkey_stats/part-r-00000
-htablename KYLIN_FFXS1SWJ7F

I will try to look at the HBase logs, but if you have seen this before and can provide possible pointers, that would be great.


Thanks
Regards,




Re: SAMPLE CUBE FAILS

Henry Saputra
HI Shailesh,

Welcome to Apache Kylin.

Could you send a subscribe email to
[hidden email] to subscribe to this list?

Since you are not subscribed and the dev list is moderated, I have to
keep approving your emails.

Thanks,

Henry


Re: SAMPLE CUBE FAILS

sdangi
Hi Henry -- I'm not sure what the ask is. I tried sending an email to [hidden email] and I'm not able to send you an email.

Can you please send me an email directly to sdangi@datalenz.com?

Thanks,
Regards,

Re: SAMPLE CUBE FAILS

Luke Han
Administrator
Hi Shailesh and Marek,

Try this:

1. Clone the latest code and build the binary package following the guide; do
not use the Maven-built one
2. Ensure the account running Kylin has read/write permissions on
HDFS/Hive/HBase
3. Check kylin.log for any exceptions

Please also share more detailed information, or even logs, here so we can
better understand and help you.

Thanks.


Best Regards!
---------------------

Luke Han


Re: SAMPLE CUBE FAILS

sdangi
Hi Luke -- Can you point me to the GitHub branch compatible w/ HBase 1.x so I can pull the latest code?

The http://www.apache.org/dyn/closer.cgi/incubator/kylin/apache-kylin-1.0-incubating/ binary does not work w/ HBase 1.x, and kylin.sh fails to start with MethodNotFound exceptions due to version incompatibilities.

I did check the kylin.log based on the package I created earlier [from the incubator-kylin-1.x-HBase1.x-a608f5b branch] and I do see this, among others:
+------------------------------------------------------------------------------------------------------+
| Create HTable                                                                                        |
+------------------------------------------------------------------------------------------------------+
[pool-7-thread-10]:[2015-10-14 09:27:41,498][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:195)] - Saving resource /execute_output/a396ce33-b48c-4f9e-a9aa-ce5597c026d5-12 (Store kylin_metadata@hbase)
[pool-7-thread-10]:[2015-10-14 09:27:41,501][INFO][org.apache.kylin.job.manager.ExecutableManager.updateJobOutput(ExecutableManager.java:241)] - job id:a396ce33-b48c-4f9e-a9aa-ce5597c026d5-12 from READY to RUNNING
[pool-7-thread-10]:[2015-10-14 09:27:41,502][INFO][org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:57)] - parameters of the HadoopShellExecutable:
[pool-7-thread-10]:[2015-10-14 09:27:41,502][INFO][org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:58)] -  -cubename kylin_sales_cube -input hdfs://worker1.sofiatechnology.com:8020/kylin/kylin_metadata/kylin-a396ce33-b48c-4f9e-a9aa-ce5597c026d5/kylin_sales_cube/rowkey_stats/part-r-00000 -htablename KYLIN_FFXS1SWJ7F
2015-10-14 09:27:41,583 INFO  [pool-7-thread-10] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x75afe43a connecting to ZooKeeper ensemble=worker2.sofiatechnology.com:2181,worker1.sofiatechnology.com:2181
2015-10-14 09:27:41,585 INFO  [pool-7-thread-10] zookeeper.ZooKeeper: Initiating client connection, connectString=worker2.sofiatechnology.com:2181,worker1.sofiatechnology.com:2181 sessionTimeout=90000 watcher=hconnection-0x75afe43a0x0, quorum=worker2.sofiatechnology.com:2181,worker1.sofiatechnology.com:2181, baseZNode=/hbase-unsecure
2015-10-14 09:27:41,587 INFO  [pool-7-thread-10-SendThread(worker1.sofiatechnology.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server worker1.sofiatechnology.com/192.168.1.5:2181. Will not attempt to authenticate using SASL (unknown error)
2015-10-14 09:27:41,588 INFO  [pool-7-thread-10-SendThread(worker1.sofiatechnology.com:2181)] zookeeper.ClientCnxn: Socket connection established to worker1.sofiatechnology.com/192.168.1.5:2181, initiating session
2015-10-14 09:27:41,737 INFO  [pool-7-thread-10-SendThread(worker1.sofiatechnology.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server worker1.sofiatechnology.com/192.168.1.5:2181, sessionid = 0x15061f90d4f0013, negotiated timeout = 40000
[pool-7-thread-10]:[2015-10-14 09:27:41,742][INFO][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:101)] - hbase will use snappy to compress data
[pool-7-thread-10]:[2015-10-14 09:27:41,743][INFO][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:101)] - hbase will use snappy to compress data
2015-10-14 09:27:41,768 INFO  [pool-7-thread-10] compress.CodecPool: Got brand-new decompressor [.snappy]
2015-10-14 09:27:41,768 INFO  [pool-7-thread-10] compress.CodecPool: Got brand-new decompressor [.snappy]
2015-10-14 09:27:41,769 INFO  [pool-7-thread-10] compress.CodecPool: Got brand-new decompressor [.snappy]
2015-10-14 09:27:41,769 INFO  [pool-7-thread-10] compress.CodecPool: Got brand-new decompressor [.snappy]
[pool-7-thread-10]:[2015-10-14 09:27:41,773][INFO][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.getSplits(CreateHTableJob.java:188)] - 1 regions
[pool-7-thread-10]:[2015-10-14 09:27:41,773][INFO][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.getSplits(CreateHTableJob.java:189)] - 0 splits
[pool-7-thread-10]:[2015-10-14 09:27:41,852][INFO][org.apache.kylin.job.tools.DeployCoprocessorCLI.addCoprocessorOnHTable(DeployCoprocessorCLI.java:119)] - Add coprocessor on KYLIN_FFXS1SWJ7F
[pool-7-thread-10]:[2015-10-14 09:27:41,853][INFO][org.apache.kylin.job.tools.DeployCoprocessorCLI.deployCoprocessor(DeployCoprocessorCLI.java:99)] - hbase table [B@247d6db6 deployed with coprocessor.
usage: CreateHTableJob
 -cubename <name>            Cube name. For exmaple, flat_item_cube
 -htablename <htable name>   HTable name
 -input <path>               Partition file path.
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: File /tmp/hbase-hbase/local/jars/tmp does not exist Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)


Any idea?

Thanks,
Regards,

Re: SAMPLE CUBE FAILS

sdangi
I just created the /tmp directory based on the earlier error (not sure if that was really required), but it is now failing with a "Permission denied" error. I'm logged in as root.

usage: CreateHTableJob
 -cubename <name>            Cube name. For exmaple, flat_item_cube
 -htablename <htable name>   HTable name
 -input <path>               Partition file path.
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: /tmp/hbase-hbase/local/jars/tmp/.cff0f893-39e8-427c-b7c1-9d37dbdaf206.kylin-coprocessor-1.1-incubating-SNAPSHOT-1.jar.1444843461862.jar (Permission denied) Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
        at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
        at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
        at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
        at org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
        at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
        at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
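Both the "Permission denied" on the coprocessor jar and the later "Mkdirs failed" error point at HBase's local jar staging directory under /tmp/hbase-hbase not being writable by the HBase service user. A hedged pre-flight check, assuming the default hbase.local.dir of /tmp/hbase-hbase and a service account named hbase (adjust both to your cluster):

```shell
#!/bin/sh
# Pre-create HBase's local jar staging dir and verify it is writable.
# /tmp/hbase-hbase is the default hbase.local.dir; pass a different path
# as $1 if your hbase-site.xml overrides it.
STAGING_DIR="${1:-/tmp/hbase-hbase/local/jars/tmp}"

mkdir -p "$STAGING_DIR"   # no-op when the path already exists

# The HBase *service* user (often 'hbase') must own the tree, not root.
# chown needs root privileges and an existing hbase account; skip quietly otherwise.
if [ "$(id -u)" -eq 0 ] && id hbase >/dev/null 2>&1; then
    chown -R hbase:hbase /tmp/hbase-hbase
fi

if [ -w "$STAGING_DIR" ]; then
    echo "staging dir writable: $STAGING_DIR"
else
    echo "staging dir NOT writable: $STAGING_DIR" >&2
    exit 1
fi
```

Running this on the HBase master host before retrying the cube build would surface the permission problem directly instead of through a failed CreateHTableJob.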

Re: SAMPLE CUBE FAILS

sdangi
Hi Luke et al -- I followed your recommendation and cloned the latest code, but it still doesn't work with HBase 1.x.

1) git clone https://github.com/apache/incubator-kylin kylin_latest
2) Added the following excludes to the POM to fix RAT issues:
                                <exclude>**/*.proto</exclude>
                                <exclude>**/*.exclude</exclude>
                                <exclude>**/*.expected</exclude>
3) Clean package and build
[root@worker1 kylin_latest]# ./script/package.sh
......
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Kylin:HadoopOLAPEngine ............................. SUCCESS [  2.570 s]
[INFO] Kylin:AtopCalcite .................................. SUCCESS [  2.191 s]
[INFO] Kylin:Common ....................................... SUCCESS [  5.426 s]
[INFO] Kylin:Metadata ..................................... SUCCESS [  1.326 s]
[INFO] Kylin:Dictionary ................................... SUCCESS [  1.556 s]
[INFO] Kylin:Cube ......................................... SUCCESS [  2.362 s]
[INFO] Kylin:InvertedIndex ................................ SUCCESS [  0.604 s]
[INFO] Kylin:Job .......................................... SUCCESS [  4.227 s]
[INFO] Kylin:Storage ...................................... SUCCESS [  2.248 s]
[INFO] Kylin:Query ........................................ SUCCESS [  1.434 s]
[INFO] Kylin:JDBC ......................................... SUCCESS [  2.065 s]
[INFO] Kylin:RESTServer ................................... SUCCESS [  7.623 s]
[INFO] Kylin:Monitor ...................................... SUCCESS [  0.863 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 34.831 s
[INFO] Finished at: 2015-10-14T16:49:42-04:00
[INFO] Final Memory: 148M/2197M
[INFO] ------------------------------------------------------------------------
package front-end
...
...
...
..
Package ready dist/kylin-1.1-incubating-SNAPSHOT-bin.tar.gz
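For reference, the RAT excludes from step 2 belong inside the Apache RAT plugin configuration in the root pom.xml. A minimal sketch of the shape (the plugin version is whatever the Kylin pom already pins; only the exclude entries come from this thread):

```xml
<!-- Root pom.xml: tell the release-audit (RAT) check to skip these file types. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/*.proto</exclude>
      <exclude>**/*.exclude</exclude>
      <exclude>**/*.expected</exclude>
    </excludes>
  </configuration>
</plugin>
```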

This package fails to start with the earlier error:

81, sessionid = 0x15067463b2f0009, negotiated timeout = 40000
[localhost-startStop-1]:[2015-10-14 16:41:05,726][ERROR][org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:307)] - Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping#0': BeanPostProcessor before instantiation of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.cache.config.internalCacheAdvisor': Cannot resolve reference to bean 'org.springframework.cache.annotation.AnnotationCacheOperationSource#0' while setting bean property 'cacheOperationSource'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.cache.annotation.AnnotationCacheOperationSource#0': BeanPostProcessor before instantiation of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.security.methodSecurityMetadataSourceAdvisor': Cannot resolve reference to bean 'org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource#0' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource#0': Cannot create inner bean '(inner bean)' of type [org.springframework.security.access.prepost.PrePostAnnotationSecurityMetadataSource] while setting constructor argument with key [0]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)': Cannot create inner bean '(inner bean)' of type [org.springframework.security.access.expression.method.ExpressionBasedAnnotationAttributeFactory] while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)': Cannot resolve reference to bean 'expressionHandler' while setting constructor 
argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'expressionHandler' defined in class path resource [kylinSecurity.xml]: Cannot resolve reference to bean 'permissionEvaluator' while setting bean property 'permissionEvaluator'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'permissionEvaluator' defined in class path resource [kylinSecurity.xml]: Cannot resolve reference to bean 'aclService' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'aclService' defined in file [/home/worker1/kylin-1.1-incubating-SNAPSHOT/tomcat/webapps/kylin/WEB-INF/classes/org/apache/kylin/rest/service/AclService.class]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.apache.kylin.rest.service.AclService]: Constructor threw exception; nested exception is java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.HBaseAdmin.<init>(Lorg/apache/hadoop/hbase/client/HConnection;)V        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:452)        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:609)        at 
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918)        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:469)        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:383)        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:283)        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5016)
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5524)
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1081)
        at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1877)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Re: SAMPLE CUBE FAILS

sdangi
In reply to this post by Luke Han
Luke/Kylin Team -- Any further updates or guidance you could offer? The latest clone does not work with HBase 1.1.

We are working on a time sensitive POC for a financial client and appreciate your responses.

Thanks,
Regards,

Re: SAMPLE CUBE FAILS

Luke Han
Administrator
Hi Shailesh,
    If timing is a concern, we strongly suggest downgrading your HBase to
0.98 for use with Kylin; the 1.x branch is not fully tested yet.

    If you still would like to try with HBase 1.x, please clone this branch:
    https://github.com/apache/incubator-kylin/tree/1.x-HBase1.x

    And, then run ./script/package.sh to generate binary package
    Then copy package from dist folder and install with your Hadoop cluster.

     BTW, which distribution are you using now? CDH or HDP?

    Thanks.

Luke


Best Regards!
---------------------

Luke Han

On Sat, Oct 17, 2015 at 8:29 AM, sdangi <[hidden email]> wrote:

> Luke/Kylin Team -- Any further updates/guidance you could offer?  Latest
> clone does not work w/ 1.1 version of HBase.
>
> We are working on a time sensitive POC for a financial client and
> appreciate
> your responses.
>
> Thanks,
> Regards,
>
>
>

Re: SAMPLE CUBE FAILS

sdangi
Hi Luke -- I will be looking into this later today, but here is the progress (or lack thereof) so far:

1)   cd /home/worker1/kylin/1.x-HBase1.x

2)   [root@worker1 1.x-HBase1.x]# git clone -b 1.x-HBase1.x https://github.com/apache/incubator-kylin.git .

[root@worker1 1.x-HBase1.x]# ls -ltr
total 88
-rw-r--r--  1 root root   849 Oct 20 10:13 README.md
-rw-r--r--  1 root root   180 Oct 20 10:13 NOTICE
-rw-r--r--  1 root root 12401 Oct 20 10:13 LICENSE
-rw-r--r--  1 root root  7290 Oct 20 10:13 KEYS
-rw-r--r--  1 root root   539 Oct 20 10:13 DISCLAIMER
drwxr-xr-x  4 root root    46 Oct 20 10:13 atopcalcite
drwxr-xr-x  4 root root    46 Oct 20 10:13 common
drwxr-xr-x  2 root root  4096 Oct 20 10:13 bin
drwxr-xr-x  2 root root    54 Oct 20 10:13 conf
drwxr-xr-x  4 root root    46 Oct 20 10:13 cube
drwxr-xr-x  4 root root    46 Oct 20 10:13 dictionary
drwxr-xr-x  2 root root    23 Oct 20 10:13 deploy
drwxr-xr-x  2 root root    22 Oct 20 10:13 docs
drwxr-xr-x  4 root root    62 Oct 20 10:13 examples
drwxr-xr-x  4 root root    46 Oct 20 10:13 invertedindex
drwxr-xr-x  4 root root    46 Oct 20 10:13 jdbc
drwxr-xr-x  4 root root    96 Oct 20 10:13 job
drwxr-xr-x  4 root root    46 Oct 20 10:13 metadata
drwxr-xr-x  4 root root    46 Oct 20 10:13 monitor
drwxr-xr-x  4 root root    46 Oct 20 10:13 query
-rw-r--r--  1 root root 39837 Oct 20 10:13 pom.xml
drwxr-xr-x  2 root root    98 Oct 20 10:13 script
drwxr-xr-x  4 root root    69 Oct 20 10:13 server
drwxr-xr-x  3 root root    17 Oct 20 10:13 src
drwxr-xr-x  4 root root    46 Oct 20 10:13 storage
drwxr-xr-x  3 root root  4096 Oct 20 10:13 webapp
drwxr-xr-x 16 root root  4096 Oct 20 10:13 website

Build using Maven:

[INFO] --- maven-assembly-plugin:2.5.5:single (make-assembly) @ kylin-monitor ---
[WARNING] Artifact: org.apache.kylin:kylin-monitor:jar:1.1-incubating-SNAPSHOT references the same file as the assembly destination file. Moving it to a temporary location for inclusion.
[INFO] Building jar: /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
[WARNING] Configuration options: 'appendAssemblyId' is set to false, and 'classifier' is missing.
Instead of attaching the assembly file: /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar, it will become the file for main project artifact.
NOTE: If multiple descriptors or descriptor-formats are provided for this project, the value of this file will be non-deterministic!
[WARNING] Replacing pre-existing project main-artifact file: /home/worker1/kylin/1.x-HBase1.x/monitor/target/archive-tmp/kylin-monitor-1.1-incubating-SNAPSHOT.jar
with assembly file: /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
[INFO]
[INFO] --- maven-jar-plugin:2.4:test-jar (default) @ kylin-monitor ---
[INFO] Building jar: /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT-tests.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Kylin:HadoopOLAPEngine ............................. SUCCESS [  0.864 s]
[INFO] Kylin:AtopCalcite .................................. SUCCESS [  5.439 s]
[INFO] Kylin:Common ....................................... SUCCESS [  7.231 s]
[INFO] Kylin:Metadata ..................................... SUCCESS [  1.428 s]
[INFO] Kylin:Dictionary ................................... SUCCESS [  1.559 s]
[INFO] Kylin:Cube ......................................... SUCCESS [  2.344 s]
[INFO] Kylin:InvertedIndex ................................ SUCCESS [  0.523 s]
[INFO] Kylin:Job .......................................... SUCCESS [  3.889 s]
[INFO] Kylin:Storage ...................................... SUCCESS [  2.018 s]
[INFO] Kylin:Query ........................................ SUCCESS [  1.278 s]
[INFO] Kylin:JDBC ......................................... SUCCESS [  1.901 s]
[INFO] Kylin:RESTServer ................................... SUCCESS [  8.819 s]
[INFO] Kylin:Monitor ...................................... SUCCESS [  1.038 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 38.658 s
[INFO] Finished at: 2015-10-20T10:17:54-04:00
[INFO] Final Memory: 132M/2053M
[INFO] ------------------------------------------------------------------------

Imported the sample cube and ran the build job. It gets as far as the 13th step and then fails while creating the HBase table; it looks like a permission issue.


==> kylin.log <==
[pool-7-thread-10]:[2015-10-20 10:44:58,078][INFO][org.apache.kylin.job.tools.DeployCoprocessorCLI.deployCoprocessor(DeployCoprocessorCLI.java:99)] - hbase table [B@a092af6 deployed with coprocessor.

usage: CreateHTableJob
 -cubename <name>            Cube name. For exmaple, flat_item_cube
 -htablename <htable name>   HTable name
 -input <path>               Partition file path.
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
        at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
        at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
        at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
        at org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
        at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
        at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)
        at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)
        at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
        ... 14 more


==> kylin_job.log <==
[pool-7-thread-10]:[2015-10-20 10:44:58,113][ERROR][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:157)] - org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
        at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
        at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
        at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
        at org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
        at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
        at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)
        at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)
        at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
        ... 14 more
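The error text itself names a bypass: setting hbase.table.sanity.checks to false in hbase-site.xml on the HBase master. A sketch, with the caveat that this only silences the check; making hbase.local.dir writable by the HBase user is the proper fix:

```xml
<!-- hbase-site.xml: last-resort workaround only. The sanity check fails
     because /tmp/hbase-hbase (hbase.local.dir) is not writable by the
     HBase service user; fixing that permission is the real cure. -->
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
```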



at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)

at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)

at java.lang.Thread.run(Thread.java:745)


at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)

at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)

at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)

at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)

at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)

at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)

at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)

... 14 more

2015-10-20 10:44:58,115 INFO  [pool-7-thread-10] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService

2015-10-20 10:44:58,115 INFO  [pool-7-thread-10] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15067463b2f001b

2015-10-20 10:44:58,145 INFO  [pool-7-thread-10] zookeeper.ZooKeeper: Session: 0x15067463b2f001b closed

2015-10-20 10:44:58,145 INFO  [pool-7-thread-10-EventThread] zookeeper.ClientCnxn: EventThread shut down

[pool-7-thread-10]:[2015-10-20 10:44:58,161][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:195)] - Saving resource /execute_output/476fa1ea-25ea-4858-b780-51028c298274-12 (Store kylin_metadata@hbase)

[pool-7-thread-10]:[2015-10-20 10:44:58,172][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:195)] - Saving resource /execute_output/476fa1ea-25ea-4858-b780-51028c298274-12 (Store kylin_metadata@hbase)
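The "Mkdirs failed to create /tmp/hbase-hbase/local/jars/tmp" failure above is the HBase master being unable to create its local jar staging directory. As a quick workaround sketch (my own suggestion, not from the thread; the HBASE_LOCAL_JARS_DIR override variable is an assumption for illustration), you can pre-create that directory with writable permissions:

```shell
# Pre-create the local jar staging dir from the error message above.
# If HBase runs as a dedicated "hbase" OS user, also chown the tree to it
# (chown -R hbase:hbase /tmp/hbase-hbase, which needs root).
DIR="${HBASE_LOCAL_JARS_DIR:-/tmp/hbase-hbase/local/jars/tmp}"
mkdir -p "$DIR"
chmod 775 "$DIR"
echo "created $DIR"
```

The proper fix, per KYLIN-953, is to point HBase at a writable tmp directory via configuration rather than patching directories by hand.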


On Sat, Oct 17, 2015 at 11:13 AM, Luke Han [via Apache Kylin (Incubating)] <[hidden email]> wrote:
Hi Shailesh,
    If timing is a concern, we strongly suggest downgrading your HBase to
0.98 with Kylin. The 1.x branch is not fully tested yet.

    If you still would like to try HBase 1.x, please clone this branch:
    https://github.com/apache/incubator-kylin/tree/1.x-HBase1.x

    Then run ./script/package.sh to generate the binary package,
    copy the package from the dist folder, and install it on your Hadoop cluster.

     BTW, which distribution are you using now? CDH or HDP?

    Thanks.

Luke


Best Regards!
---------------------

Luke Han

On Sat, Oct 17, 2015 at 8:29 AM, sdangi <[hidden email]> wrote:

> Luke/Kylin Team -- Any further updates/guidance you could offer?  Latest
> clone does not work w/ 1.1 version of HBase.
>
> We are working on a time sensitive POC for a financial client and
> appreciate
> your responses.
>
> Thanks,
> Regards,
>
>
>
> --
> View this message in context:
> http://apache-kylin-incubating.74782.x6.nabble.com/SAMPLE-CUBE-FAILS-tp1936p1994.html
> Sent from the Apache Kylin (Incubating) mailing list archive at Nabble.com.
>





Re: SAMPLE CUBE FAILS

sdangi
I'm now hitting a documented issue. Trying to apply the fix suggested in the
JIRA

https://issues.apache.org/jira/browse/KYLIN-953

When the cube job runs at the "Convert Cuboid Data to HFile" step, it throws
an error like the one below:

[pool-5-thread-8]:[2015-08-18 09:43:15,854][ERROR]
[org.apache.kylin.job.hadoop.cube.CubeHFileJob.run(CubeHFileJob.java:98)] -
error in CubeHFileJob
java.lang.IllegalArgumentException: Can not create a Path from a null string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:123)

On Tue, Oct 20, 2015 at 10:49 AM, sdangi <[hidden email]> wrote:

> Hi Luke -- I will be looking into this later today.  But here is the
> progress (or lack there of), so far
>
> 1)   cd /home/worker1/kylin/1.x-HBase1.x
>
> 2)   [root@worker1 1.x-HBase1.x]# git clone -b 1.x-HBase1.x
> https://github.com/apache/incubator-kylin.git .
>
> [root@worker1 1.x-HBase1.x]# ls -ltr
>
> total 88
>
> -rw-r--r--  1 root root   849 Oct 20 10:13 README.md
>
> -rw-r--r--  1 root root   180 Oct 20 10:13 NOTICE
>
> -rw-r--r--  1 root root 12401 Oct 20 10:13 LICENSE
>
> -rw-r--r--  1 root root  7290 Oct 20 10:13 KEYS
>
> -rw-r--r--  1 root root   539 Oct 20 10:13 DISCLAIMER
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 atopcalcite
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 common
>
> drwxr-xr-x  2 root root  4096 Oct 20 10:13 bin
>
> drwxr-xr-x  2 root root    54 Oct 20 10:13 conf
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 cube
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 dictionary
>
> drwxr-xr-x  2 root root    23 Oct 20 10:13 deploy
>
> drwxr-xr-x  2 root root    22 Oct 20 10:13 docs
>
> drwxr-xr-x  4 root root    62 Oct 20 10:13 examples
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 invertedindex
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 jdbc
>
> drwxr-xr-x  4 root root    96 Oct 20 10:13 job
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 metadata
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 monitor
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 query
>
> -rw-r--r--  1 root root 39837 Oct 20 10:13 pom.xml
>
> drwxr-xr-x  2 root root    98 Oct 20 10:13 script
>
> drwxr-xr-x  4 root root    69 Oct 20 10:13 server
>
> drwxr-xr-x  3 root root    17 Oct 20 10:13 src
>
> drwxr-xr-x  4 root root    46 Oct 20 10:13 storage
>
> drwxr-xr-x  3 root root  4096 Oct 20 10:13 webapp
>
> drwxr-xr-x 16 root root  4096 Oct 20 10:13 website
>
>
>
>
>
>
>
> Built using Maven:
>
> INFO] --- maven-assembly-plugin:2.5.5:single (make-assembly) @
> kylin-monitor ---
>
> [WARNING] Artifact:
> org.apache.kylin:kylin-monitor:jar:1.1-incubating-SNAPSHOT references the
> same file as the assembly destination file. Moving it to a temporary
> location for inclusion.
>
> [INFO] Building jar:
>
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
>
> [WARNING] Configuration options: 'appendAssemblyId' is set to false, and
> 'classifier' is missing.
>
> Instead of attaching the assembly file:
>
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar,
> it will become the file for main project artifact.
>
> NOTE: If multiple descriptors or descriptor-formats are provided for this
> project, the value of this file will be non-deterministic!
>
> [WARNING] Replacing pre-existing project main-artifact file:
>
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/archive-tmp/kylin-monitor-1.1-incubating-SNAPSHOT.jar
>
> with assembly file:
>
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
>
> [INFO]
>
> [INFO] --- maven-jar-plugin:2.4:test-jar (default) @ kylin-monitor ---
>
> [INFO] Building jar:
>
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT-tests.jar
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] Reactor Summary:
>
> [INFO]
>
> [INFO] Kylin:HadoopOLAPEngine ............................. SUCCESS [
> 0.864 s]
>
> [INFO] Kylin:AtopCalcite .................................. SUCCESS [
> 5.439 s]
>
> [INFO] Kylin:Common ....................................... SUCCESS [
> 7.231 s]
>
> [INFO] Kylin:Metadata ..................................... SUCCESS [
> 1.428 s]
>
> [INFO] Kylin:Dictionary ................................... SUCCESS [
> 1.559 s]
>
> [INFO] Kylin:Cube ......................................... SUCCESS [
> 2.344 s]
>
> [INFO] Kylin:InvertedIndex ................................ SUCCESS [
> 0.523 s]
>
> [INFO] Kylin:Job .......................................... SUCCESS [
> 3.889 s]
>
> [INFO] Kylin:Storage ...................................... SUCCESS [
> 2.018 s]
>
> [INFO] Kylin:Query ........................................ SUCCESS [
> 1.278 s]
>
> [INFO] Kylin:JDBC ......................................... SUCCESS [
> 1.901 s]
>
> [INFO] Kylin:RESTServer ................................... SUCCESS [
> 8.819 s]
>
> [INFO] Kylin:Monitor ...................................... SUCCESS [
> 1.038 s]
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] BUILD SUCCESS
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] Total time: 38.658 s
>
> [INFO] Finished at: 2015-10-20T10:17:54-04:00
>
> [INFO] Final Memory: 132M/2053M
>
> [INFO]
> ------------------------------------------------------------------------
>
>
>
>
>
>  Imported the sample cube and ran job.  It goes up to the 13th step and
> fails build Hbase tables - seems like a permission issue.
>
>
> ==> kylin.log <==
>
> [pool-7-thread-10]:[2015-10-20
>
> 10:44:58,078][INFO][org.apache.kylin.job.tools.DeployCoprocessorCLI.deployCoprocessor(DeployCoprocessorCLI.java:99)]
> - hbase table [B@a092af6 deployed with coprocessor.
>
> usage: CreateHTableJob
>
>  -cubename <name>            Cube name. For exmaple, flat_item_cube
>
>  -htablename <htable name>   HTable name
>
>  -input <path>               Partition file path.
>
> org.apache.hadoop.hbase.DoNotRetryIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want
> to bypass sanity checks
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
>
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
>
> at
>
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
>
> at
>
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
>
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>
> at java.lang.Thread.run(Thread.java:745)
>
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>
> at
>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>
> at
>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>
> at
>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>
> at
>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>
> at
>
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
>
> at
>
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
>
> at
>
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
>
> at
>
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
>
> at
>
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
>
> at
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
>
> at
>
> org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
>
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>
> at
>
> org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
>
> at
>
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>
> at
>
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
>
> at
>
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>
> at
>
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
>
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
> at java.lang.Thread.run(Thread.java:745)
>
> Caused by:
>
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want
> to bypass sanity checks
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
>
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
>
> at
>
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
>
> at
>
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
>
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>
> at java.lang.Thread.run(Thread.java:745)
>
>
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
>
> at
>
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
>
> at
>
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
>
> at
>
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)
>
> at
>
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)
>
> at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)
>
> at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)
>
> at
>
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>
> ... 14 more
>
>
> ==> kylin_job.log <==
>
> [pool-7-thread-10]:[2015-10-20
>
> 10:44:58,113][ERROR][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:157)]
> - org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want
> to bypass sanity checks
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
>
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
>
> at
>
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
>
> at
>
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
>
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>
> at java.lang.Thread.run(Thread.java:745)
>
>
> org.apache.hadoop.hbase.DoNotRetryIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want
> to bypass sanity checks
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
>
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
>
> at
>
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
>
> at
>
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
>
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>
> at java.lang.Thread.run(Thread.java:745)
>
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>
> at
>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>
> at
>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>
> at
>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>
> at
>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>
> at
>
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
>
> at
>
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
>
> at
>
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
>
> at
>
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
>
> at
>
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
>
> at
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
>
> at
>
> org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
>
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>
> at
>
> org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
>
> at
>
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>
> at
>
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
>
> at
>
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>
> at
>
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
>
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
> at java.lang.Thread.run(Thread.java:745)
>
> Caused by:
>
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want
> to bypass sanity checks
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
>
> at
>
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
>
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
>
> at
>
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
>
> at
>
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
>
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>
> at java.lang.Thread.run(Thread.java:745)
>
>
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
>
> at
>
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
>
> at
>
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
>
> at
>
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)
>
> at
>
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)
>
> at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)
>
> at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)
>
> at
>
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>
> ... 14 more
>
>
> ==> kylin.log <==
>
> [same DoNotRetryIOException stack trace as in kylin_job.log above]
>
>
> 2015-10-20 10:44:58,115 INFO  [pool-7-thread-10]
> client.ConnectionManager$HConnectionImplementation: Closing master
> protocol: MasterService
>
> 2015-10-20 10:44:58,115 INFO  [pool-7-thread-10]
> client.ConnectionManager$HConnectionImplementation: Closing zookeeper
> sessionid=0x15067463b2f001b
>
> 2015-10-20 10:44:58,145 INFO  [pool-7-thread-10] zookeeper.ZooKeeper:
> Session: 0x15067463b2f001b closed
>
> 2015-10-20 10:44:58,145 INFO  [pool-7-thread-10-EventThread]
> zookeeper.ClientCnxn: EventThread shut down
>
> [pool-7-thread-10]:[2015-10-20
>
> 10:44:58,161][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:195)]
> - Saving resource /execute_output/476fa1ea-25ea-4858-b780-51028c298274-12
> (Store kylin_metadata@hbase)
>
> [pool-7-thread-10]:[2015-10-20
>
> 10:44:58,172][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:195)]
> - Saving resource /execute_output/476fa1ea-25ea-4858-b780-51028c298274-12
> (Store kylin_metadata@hbase)
>
> On Sat, Oct 17, 2015 at 11:13 AM, Luke Han [via Apache Kylin (Incubating)]
> <
> [hidden email]> wrote:
>
> > Hi Shailesh,
> >     If timing is a concern, we strongly suggest downgrading your HBase to
> > 0.98 with Kylin. The 1.x branch is not fully tested yet.
> >
> >     If you still would like to try with HBase 1.x, please clone this
> > branch:
> >     https://github.com/apache/incubator-kylin/tree/1.x-HBase1.x
> >
> >     And, then run ./script/package.sh to generate binary package
> >     Then copy package from dist folder and install with your Hadoop
> > cluster.
> >
> >      BTW, which distribution you are using now? CDH or HDP?
> >
> >     Thanks.
> >
> > Luke
> >
> >
> > Best Regards!
> > ---------------------
> >
> > Luke Han
> >
> > On Sat, Oct 17, 2015 at 8:29 AM, sdangi <[hidden email]
> > <http:///user/SendEmail.jtp?type=node&node=1996&i=0>> wrote:
> >
> > > Luke/Kylin Team -- Any further updates/guidance you could offer? Latest
> > > clone does not work w/ 1.1 version of HBase.
> > >
> > > We are working on a time sensitive POC for a financial client and
> > > appreciate
> > > your responses.
> > >
> > > Thanks,
> > > Regards,
> > >
> > >
> > >
> > > --
> > > View this message in context:
> > >
> >
> http://apache-kylin-incubating.74782.x6.nabble.com/SAMPLE-CUBE-FAILS-tp1936p1994.html
> > > Sent from the Apache Kylin (Incubating) mailing list archive at
> > Nabble.com.
> > >
> >
> >
>
>
>
>
> --
> View this message in context:
> http://apache-kylin-incubating.74782.x6.nabble.com/SAMPLE-CUBE-FAILS-tp1936p2037.html
> Sent from the Apache Kylin (Incubating) mailing list archive at Nabble.com.
>

Re: SAMPLE CUBE FAILS

Yang
To sum up what's in https://issues.apache.org/jira/browse/KYLIN-953: to
resolve the issue, you need one of the two Hadoop configs below in your site
XMLs:

- hadoop.tmp.dir    (for HBase 1.1.0 and earlier)
- hbase.fs.tmp.dir   (for HBase 1.1.1 and later)
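For HBase 1.1.1 and later, the property would go into hbase-site.xml roughly
like this (an illustrative sketch only -- the property names are from
KYLIN-953, but the /tmp/hbase-staging value is just a placeholder; any tmp
path writable by the job user should work):

```xml
<!-- hbase-site.xml (HBase 1.1.1+); for HBase 1.1.0 and earlier, set
     hadoop.tmp.dir in core-site.xml instead -->
<property>
  <name>hbase.fs.tmp.dir</name>
  <value>/tmp/hbase-staging</value>
</property>
```

After editing the site XML, restart the affected services so the MapReduce
jobs Kylin submits pick up the new value.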

On Wed, Oct 21, 2015 at 1:07 AM, Shailesh Dangi <[hidden email]> wrote:

> I'm now hitting a documented issue while trying to apply the fix suggested
> in the JIRA
>
> https://issues.apache.org/jira/browse/KYLIN-953
>
> When the cube job runs the "Convert Cuboid Data to HFile" step, it throws an
> error like the one below:
> [pool-5-thread-8]:[2015-08-18 09:43:15,854][ERROR]
> [org.apache.kylin.job.hadoop.cube.CubeHFileJob.run(CubeHFileJob.java:98)] -
> error in CubeHFileJob
> java.lang.IllegalArgumentException: Can not create a Path from a null
> string
> at org.apache.hadoop.fs.Path.checkPathArg(Path.java:123)
>
> On Tue, Oct 20, 2015 at 10:49 AM, sdangi <[hidden email]> wrote:
>
> > Hi Luke -- I will be looking into this later today.  But here is the
> > progress (or lack thereof) so far:
> >
> > 1)   cd /home/worker1/kylin/1.x-HBase1.x
> >
> > 2)   [root@worker1 1.x-HBase1.x]# git clone -b 1.x-HBase1.x
> > https://github.com/apache/incubator-kylin.git .
> >
> > [root@worker1 1.x-HBase1.x]# ls -ltr
> >
> > total 88
> >
> > -rw-r--r--  1 root root   849 Oct 20 10:13 README.md
> >
> > -rw-r--r--  1 root root   180 Oct 20 10:13 NOTICE
> >
> > -rw-r--r--  1 root root 12401 Oct 20 10:13 LICENSE
> >
> > -rw-r--r--  1 root root  7290 Oct 20 10:13 KEYS
> >
> > -rw-r--r--  1 root root   539 Oct 20 10:13 DISCLAIMER
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 atopcalcite
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 common
> >
> > drwxr-xr-x  2 root root  4096 Oct 20 10:13 bin
> >
> > drwxr-xr-x  2 root root    54 Oct 20 10:13 conf
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 cube
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 dictionary
> >
> > drwxr-xr-x  2 root root    23 Oct 20 10:13 deploy
> >
> > drwxr-xr-x  2 root root    22 Oct 20 10:13 docs
> >
> > drwxr-xr-x  4 root root    62 Oct 20 10:13 examples
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 invertedindex
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 jdbc
> >
> > drwxr-xr-x  4 root root    96 Oct 20 10:13 job
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 metadata
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 monitor
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 query
> >
> > -rw-r--r--  1 root root 39837 Oct 20 10:13 pom.xml
> >
> > drwxr-xr-x  2 root root    98 Oct 20 10:13 script
> >
> > drwxr-xr-x  4 root root    69 Oct 20 10:13 server
> >
> > drwxr-xr-x  3 root root    17 Oct 20 10:13 src
> >
> > drwxr-xr-x  4 root root    46 Oct 20 10:13 storage
> >
> > drwxr-xr-x  3 root root  4096 Oct 20 10:13 webapp
> >
> > drwxr-xr-x 16 root root  4096 Oct 20 10:13 website
> >
> >
> >
> >
> >
> >
> >
> > Build using Maven:
> >
> > [INFO] --- maven-assembly-plugin:2.5.5:single (make-assembly) @
> > kylin-monitor ---
> >
> > [WARNING] Artifact:
> > org.apache.kylin:kylin-monitor:jar:1.1-incubating-SNAPSHOT references the
> > same file as the assembly destination file. Moving it to a temporary
> > location for inclusion.
> >
> > [INFO] Building jar:
> >
> >
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
> >
> > [WARNING] Configuration options: 'appendAssemblyId' is set to false, and
> > 'classifier' is missing.
> >
> > Instead of attaching the assembly file:
> >
> >
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar,
> > it will become the file for main project artifact.
> >
> > NOTE: If multiple descriptors or descriptor-formats are provided for this
> > project, the value of this file will be non-deterministic!
> >
> > [WARNING] Replacing pre-existing project main-artifact file:
> >
> >
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/archive-tmp/kylin-monitor-1.1-incubating-SNAPSHOT.jar
> >
> > with assembly file:
> >
> >
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
> >
> > [INFO]
> >
> > [INFO] --- maven-jar-plugin:2.4:test-jar (default) @ kylin-monitor ---
> >
> > [INFO] Building jar:
> >
> >
> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT-tests.jar
> >
> > [INFO]
> > ------------------------------------------------------------------------
> >
> > [INFO] Reactor Summary:
> >
> > [INFO]
> >
> > [INFO] Kylin:HadoopOLAPEngine ............................. SUCCESS [
> > 0.864 s]
> >
> > [INFO] Kylin:AtopCalcite .................................. SUCCESS [
> > 5.439 s]
> >
> > [INFO] Kylin:Common ....................................... SUCCESS [
> > 7.231 s]
> >
> > [INFO] Kylin:Metadata ..................................... SUCCESS [
> > 1.428 s]
> >
> > [INFO] Kylin:Dictionary ................................... SUCCESS [
> > 1.559 s]
> >
> > [INFO] Kylin:Cube ......................................... SUCCESS [
> > 2.344 s]
> >
> > [INFO] Kylin:InvertedIndex ................................ SUCCESS [
> > 0.523 s]
> >
> > [INFO] Kylin:Job .......................................... SUCCESS [
> > 3.889 s]
> >
> > [INFO] Kylin:Storage ...................................... SUCCESS [
> > 2.018 s]
> >
> > [INFO] Kylin:Query ........................................ SUCCESS [
> > 1.278 s]
> >
> > [INFO] Kylin:JDBC ......................................... SUCCESS [
> > 1.901 s]
> >
> > [INFO] Kylin:RESTServer ................................... SUCCESS [
> > 8.819 s]
> >
> > [INFO] Kylin:Monitor ...................................... SUCCESS [
> > 1.038 s]
> >
> > [INFO]
> > ------------------------------------------------------------------------
> >
> > [INFO] BUILD SUCCESS
> >
> > [INFO]
> > ------------------------------------------------------------------------
> >
> > [INFO] Total time: 38.658 s
> >
> > [INFO] Finished at: 2015-10-20T10:17:54-04:00
> >
> > [INFO] Final Memory: 132M/2053M
> >
> > [INFO]
> > ------------------------------------------------------------------------
> >
> >
> >
> >
> >
> > Imported the sample cube and ran the job.  It goes up to the 13th step and
> > fails building HBase tables - seems like a permission issue.
> >
> >
> > ==> kylin.log <==
> >
> > [pool-7-thread-10]:[2015-10-20
> >
> >
> 10:44:58,078][INFO][org.apache.kylin.job.tools.DeployCoprocessorCLI.deployCoprocessor(DeployCoprocessorCLI.java:99)]
> > - hbase table [B@a092af6 deployed with coprocessor.
> >
> > usage: CreateHTableJob
> >
> >  -cubename <name>            Cube name. For example, flat_item_cube
> >
> >  -htablename <htable name>   HTable name
> >
> >  -input <path>               Partition file path.
> >
> > org.apache.hadoop.hbase.DoNotRetryIOException:
> > org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> > hbase.table.sanity.checks to false at conf or table descriptor if you
> want
> > to bypass sanity checks
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> >
> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> >
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> >
> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >
> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >
> > at
> >
> >
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >
> > at
> >
> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >
> > at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> >
> > at
> >
> >
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
> >
> > at
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
> >
> > at
> >
> >
> org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
> >
> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >
> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> >
> > at
> >
> >
> org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
> >
> > at
> >
> >
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >
> > at
> >
> >
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
> >
> > at
> >
> >
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >
> > at
> >
> >
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
> >
> > at
> >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >
> > at
> >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> > Caused by:
> >
> >
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> > org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> > hbase.table.sanity.checks to false at conf or table descriptor if you
> want
> > to bypass sanity checks
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> >
> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> >
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> >
> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >
> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > at
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)
> >
> > at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)
> >
> > at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> >
> > ... 14 more
> >
> >
> > ==> kylin_job.log <==
> >
> > [pool-7-thread-10]:[2015-10-20
> >
> >
> 10:44:58,113][ERROR][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:157)]
> > - org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> > hbase.table.sanity.checks to false at conf or table descriptor if you
> want
> > to bypass sanity checks
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> >
> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> >
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> >
> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >
> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > org.apache.hadoop.hbase.DoNotRetryIOException:
> > org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> > hbase.table.sanity.checks to false at conf or table descriptor if you
> want
> > to bypass sanity checks
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> >
> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> >
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> >
> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >
> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >
> > at
> >
> >
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >
> > at
> >
> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >
> > at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> >
> > at
> >
> >
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
> >
> > at
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
> >
> > at
> >
> >
> org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
> >
> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >
> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> >
> > at
> >
> >
> org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
> >
> > at
> >
> >
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >
> > at
> >
> >
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
> >
> > at
> >
> >
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >
> > at
> >
> >
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
> >
> > at
> >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >
> > at
> >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> > Caused by:
> >
> >
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> > org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> > hbase.table.sanity.checks to false at conf or table descriptor if you
> want
> > to bypass sanity checks
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> >
> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> >
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> >
> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >
> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > at
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)
> >
> > at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)
> >
> > at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> >
> > ... 14 more
> >
> >
> > ==> kylin.log <==
> >
> > [pool-7-thread-10]:[2015-10-20
> >
> >
> 10:44:58,113][ERROR][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:157)]
> > - org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> > hbase.table.sanity.checks to false at conf or table descriptor if you
> want
> > to bypass sanity checks
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> >
> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> >
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> >
> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >
> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > org.apache.hadoop.hbase.DoNotRetryIOException:
> > org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> > hbase.table.sanity.checks to false at conf or table descriptor if you
> want
> > to bypass sanity checks
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> >
> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> >
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> >
> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >
> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >
> > at
> >
> >
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >
> > at
> >
> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >
> > at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> >
> > at
> >
> >
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
> >
> > at
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
> >
> > at
> >
> >
> org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
> >
> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >
> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> >
> > at
> >
> >
> org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
> >
> > at
> >
> >
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >
> > at
> >
> >
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
> >
> > at
> >
> >
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
> >
> > at
> >
> >
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
> >
> > at
> >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >
> > at
> >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> > Caused by:
> >
> >
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> > org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
> > hbase.table.sanity.checks to false at conf or table descriptor if you
> want
> > to bypass sanity checks
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> >
> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> >
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> >
> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >
> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > at
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)
> >
> > at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)
> >
> > at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> >
> > ... 14 more
> >
> > 2015-10-20 10:44:58,115 INFO  [pool-7-thread-10]
> > client.ConnectionManager$HConnectionImplementation: Closing master
> > protocol: MasterService
> >
> > 2015-10-20 10:44:58,115 INFO  [pool-7-thread-10]
> > client.ConnectionManager$HConnectionImplementation: Closing zookeeper
> > sessionid=0x15067463b2f001b
> >
> > 2015-10-20 10:44:58,145 INFO  [pool-7-thread-10] zookeeper.ZooKeeper:
> > Session: 0x15067463b2f001b closed
> >
> > 2015-10-20 10:44:58,145 INFO  [pool-7-thread-10-EventThread]
> > zookeeper.ClientCnxn: EventThread shut down
> >
> > [pool-7-thread-10]:[2015-10-20
> >
> >
> 10:44:58,161][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:195)]
> > - Saving resource /execute_output/476fa1ea-25ea-4858-b780-51028c298274-12
> > (Store kylin_metadata@hbase)
> >
> > [pool-7-thread-10]:[2015-10-20
> >
> >
> 10:44:58,172][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:195)]
> > - Saving resource /execute_output/476fa1ea-25ea-4858-b780-51028c298274-12
> > (Store kylin_metadata@hbase)
> >

Re: SAMPLE CUBE FAILS

Yang
I modified the 1.x-HBase1.x branch so that "hadoop.tmp.dir" and
"hbase.fs.tmp.dir" default to "/tmp" when missing. This showstopper should not
happen again.
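For reference, a minimal sketch of the two site-XML entries named in
KYLIN-953. The property names come from the thread; which file each one lives
in (hbase-site.xml vs core-site.xml) and the "/tmp" value are illustrative
assumptions, not taken from the thread:

```xml
<!-- hbase-site.xml : HBase 1.1.1 and later -->
<property>
  <name>hbase.fs.tmp.dir</name>
  <value>/tmp</value>
</property>

<!-- core-site.xml : HBase 1.1.0 and earlier. hadoop.tmp.dir is often set
     already; verify the configured path exists and is writable. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp</value>
</property>
```

Only one of the two is needed, matching your HBase version.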

On Fri, Oct 23, 2015 at 1:18 PM, Li Yang <[hidden email]> wrote:

> To sum up what's in https://issues.apache.org/jira/browse/KYLIN-953: to
> resolve the issue, you need one of the two Hadoop configs below in your site
> XMLs
>
> - hadoop.tmp.dir    (for hbase 1.1.0 and before)
> - hbase.fs.tmp.dir   (for hbase 1.1.1 and after)
>
> On Wed, Oct 21, 2015 at 1:07 AM, Shailesh Dangi <[hidden email]>
> wrote:
>
>> I'm now hitting a documented issue while trying to apply the fix suggested
>> in the JIRA
>>
>> https://issues.apache.org/jira/browse/KYLIN-953
>>
>> When the cube job runs the "Convert Cuboid Data to HFile" step, it throws an
>> error like the one below:
>> [pool-5-thread-8]:[2015-08-18 09:43:15,854][ERROR]
>> [org.apache.kylin.job.hadoop.cube.CubeHFileJob.run(CubeHFileJob.java:98)]
>> - error in CubeHFileJob
>> java.lang.IllegalArgumentException: Can not create a Path from a null
>> string
>> at org.apache.hadoop.fs.Path.checkPathArg(Path.java:123)
>>
>> On Tue, Oct 20, 2015 at 10:49 AM, sdangi <[hidden email]> wrote:
>>
>> > Hi Luke -- I will be looking into this later today. But here is the
>> > progress (or lack thereof) so far:
>> >
>> > 1)   cd /home/worker1/kylin/1.x-HBase1.x
>> >
>> > 2)   [root@worker1 1.x-HBase1.x]# git clone -b 1.x-HBase1.x
>> > https://github.com/apache/incubator-kylin.git .
>> >
>> > [root@worker1 1.x-HBase1.x]# ls -ltr
>> >
>> > total 88
>> >
>> > -rw-r--r--  1 root root   849 Oct 20 10:13 README.md
>> >
>> > -rw-r--r--  1 root root   180 Oct 20 10:13 NOTICE
>> >
>> > -rw-r--r--  1 root root 12401 Oct 20 10:13 LICENSE
>> >
>> > -rw-r--r--  1 root root  7290 Oct 20 10:13 KEYS
>> >
>> > -rw-r--r--  1 root root   539 Oct 20 10:13 DISCLAIMER
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 atopcalcite
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 common
>> >
>> > drwxr-xr-x  2 root root  4096 Oct 20 10:13 bin
>> >
>> > drwxr-xr-x  2 root root    54 Oct 20 10:13 conf
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 cube
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 dictionary
>> >
>> > drwxr-xr-x  2 root root    23 Oct 20 10:13 deploy
>> >
>> > drwxr-xr-x  2 root root    22 Oct 20 10:13 docs
>> >
>> > drwxr-xr-x  4 root root    62 Oct 20 10:13 examples
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 invertedindex
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 jdbc
>> >
>> > drwxr-xr-x  4 root root    96 Oct 20 10:13 job
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 metadata
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 monitor
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 query
>> >
>> > -rw-r--r--  1 root root 39837 Oct 20 10:13 pom.xml
>> >
>> > drwxr-xr-x  2 root root    98 Oct 20 10:13 script
>> >
>> > drwxr-xr-x  4 root root    69 Oct 20 10:13 server
>> >
>> > drwxr-xr-x  3 root root    17 Oct 20 10:13 src
>> >
>> > drwxr-xr-x  4 root root    46 Oct 20 10:13 storage
>> >
>> > drwxr-xr-x  3 root root  4096 Oct 20 10:13 webapp
>> >
>> > drwxr-xr-x 16 root root  4096 Oct 20 10:13 website
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > Build using Maven:
>> >
>> > [INFO] --- maven-assembly-plugin:2.5.5:single (make-assembly) @
>> > kylin-monitor ---
>> >
>> > [WARNING] Artifact:
>> > org.apache.kylin:kylin-monitor:jar:1.1-incubating-SNAPSHOT references
>> the
>> > same file as the assembly destination file. Moving it to a temporary
>> > location for inclusion.
>> >
>> > [INFO] Building jar:
>> >
>> >
>> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
>> >
>> > [WARNING] Configuration options: 'appendAssemblyId' is set to false, and
>> > 'classifier' is missing.
>> >
>> > Instead of attaching the assembly file:
>> >
>> >
>> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar,
>> > it will become the file for main project artifact.
>> >
>> > NOTE: If multiple descriptors or descriptor-formats are provided for
>> this
>> > project, the value of this file will be non-deterministic!
>> >
>> > [WARNING] Replacing pre-existing project main-artifact file:
>> >
>> >
>> /home/worker1/kylin/1.x-HBase1.x/monitor/target/archive-tmp/kylin-monitor-1.1-incubating-SNAPSHOT.jar
>> >
>> > with assembly file:
>> >
>> >
>> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT.jar
>> >
>> > [INFO]
>> >
>> > [INFO] --- maven-jar-plugin:2.4:test-jar (default) @ kylin-monitor ---
>> >
>> > [INFO] Building jar:
>> >
>> >
>> /home/worker1/kylin/1.x-HBase1.x/monitor/target/kylin-monitor-1.1-incubating-SNAPSHOT-tests.jar
>> >
>> > [INFO]
>> > ------------------------------------------------------------------------
>> >
>> > [INFO] Reactor Summary:
>> >
>> > [INFO]
>> >
>> > [INFO] Kylin:HadoopOLAPEngine ............................. SUCCESS [
>> > 0.864 s]
>> >
>> > [INFO] Kylin:AtopCalcite .................................. SUCCESS [
>> > 5.439 s]
>> >
>> > [INFO] Kylin:Common ....................................... SUCCESS [
>> > 7.231 s]
>> >
>> > [INFO] Kylin:Metadata ..................................... SUCCESS [
>> > 1.428 s]
>> >
>> > [INFO] Kylin:Dictionary ................................... SUCCESS [
>> > 1.559 s]
>> >
>> > [INFO] Kylin:Cube ......................................... SUCCESS [
>> > 2.344 s]
>> >
>> > [INFO] Kylin:InvertedIndex ................................ SUCCESS [
>> > 0.523 s]
>> >
>> > [INFO] Kylin:Job .......................................... SUCCESS [
>> > 3.889 s]
>> >
>> > [INFO] Kylin:Storage ...................................... SUCCESS [
>> > 2.018 s]
>> >
>> > [INFO] Kylin:Query ........................................ SUCCESS [
>> > 1.278 s]
>> >
>> > [INFO] Kylin:JDBC ......................................... SUCCESS [
>> > 1.901 s]
>> >
>> > [INFO] Kylin:RESTServer ................................... SUCCESS [
>> > 8.819 s]
>> >
>> > [INFO] Kylin:Monitor ...................................... SUCCESS [
>> > 1.038 s]
>> >
>> > [INFO]
>> > ------------------------------------------------------------------------
>> >
>> > [INFO] BUILD SUCCESS
>> >
>> > [INFO]
>> > ------------------------------------------------------------------------
>> >
>> > [INFO] Total time: 38.658 s
>> >
>> > [INFO] Finished at: 2015-10-20T10:17:54-04:00
>> >
>> > [INFO] Final Memory: 132M/2053M
>> >
>> > [INFO]
>> > ------------------------------------------------------------------------
>> >
>> >
>> >
>> >
>> >
>> >  Imported the sample cube and ran the job.  It gets to the 13th step and
>> > fails to build the HBase table - seems like a permission issue.
>> >
>> >
>> > ==> kylin.log <==
>> >
>> > [pool-7-thread-10]:[2015-10-20
>> >
>> >
>> 10:44:58,078][INFO][org.apache.kylin.job.tools.DeployCoprocessorCLI.deployCoprocessor(DeployCoprocessorCLI.java:99)]
>> > - hbase table [B@a092af6 deployed with coprocessor.
>> >
>> > usage: CreateHTableJob
>> >
>> >  -cubename <name>            Cube name. For exmaple, flat_item_cube
>> >
>> >  -htablename <htable name>   HTable name
>> >
>> >  -input <path>               Partition file path.
>> >
>> > org.apache.hadoop.hbase.DoNotRetryIOException:
>> > org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
>> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
>> > hbase.table.sanity.checks to false at conf or table descriptor if you
>> want
>> > to bypass sanity checks
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
>> >
>> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
>> >
>> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>> >
>> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>> >
>> > at
>> >
>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>> >
>> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>> >
>> > at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> >
>> > at
>> >
>> >
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>> >
>> > at
>> >
>> >
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> >
>> > at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:636)
>> >
>> > at
>> >
>> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:557)
>> >
>> > at
>> >
>> >
>> org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:150)
>> >
>> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>> >
>> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>> >
>> > at
>> >
>> >
>> org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
>> >
>> > at
>> >
>> >
>> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>> >
>> > at
>> >
>> >
>> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
>> >
>> > at
>> >
>> >
>> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>> >
>> > at
>> >
>> >
>> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
>> >
>> > at
>> >
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> >
>> > at
>> >
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> >
>> > at java.lang.Thread.run(Thread.java:745)
>> >
>> > Caused by:
>> >
>> >
>> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
>> > org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
>> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
>> > hbase.table.sanity.checks to false at conf or table descriptor if you
>> want
>> > to bypass sanity checks
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
>> >
>> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
>> >
>> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>> >
>> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>> >
>> > at
>> >
>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>> >
>> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>> >
>> > at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > at
>> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:51086)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1802)
>> >
>> > at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:641)
>> >
>> > at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:637)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>> >
>> > ... 14 more
>> >
>> >
>> > ==> kylin_job.log <==
>> >
>> > [pool-7-thread-10]:[2015-10-20
>> >
>> >
>> 10:44:58,113][ERROR][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:157)]
>> > - org.apache.hadoop.hbase.DoNotRetryIOException: Mkdirs failed to create
>> > /tmp/hbase-hbase/local/jars/tmp (exists=false, cwd=file:/home/hbase) Set
>> > hbase.table.sanity.checks to false at conf or table descriptor if you
>> want
>> > to bypass sanity checks
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
>> >
>> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
>> >
>> > at
>> >
>> >
>> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
>> >
>> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>> >
>> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>> >
>> > at
>> >
>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>> >
>> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>> >
>> > at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > [second copy of the same DoNotRetryIOException stack trace elided; it is
>> > identical to the one from kylin.log above]
>> >
>> >
>> > ==> kylin.log <==
>> >
>> > [same CreateHTableJob ERROR and DoNotRetryIOException stack trace as in
>> > kylin_job.log above, elided]
>> >
>> > 2015-10-20 10:44:58,115 INFO  [pool-7-thread-10]
>> > client.ConnectionManager$HConnectionImplementation: Closing master
>> > protocol: MasterService
>> >
>> > 2015-10-20 10:44:58,115 INFO  [pool-7-thread-10]
>> > client.ConnectionManager$HConnectionImplementation: Closing zookeeper
>> > sessionid=0x15067463b2f001b
>> >
>> > 2015-10-20 10:44:58,145 INFO  [pool-7-thread-10] zookeeper.ZooKeeper:
>> > Session: 0x15067463b2f001b closed
>> >
>> > 2015-10-20 10:44:58,145 INFO  [pool-7-thread-10-EventThread]
>> > zookeeper.ClientCnxn: EventThread shut down
>> >
>> > [pool-7-thread-10]:[2015-10-20
>> >
>> >
>> 10:44:58,161][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:195)]
>> > - Saving resource
>> /execute_output/476fa1ea-25ea-4858-b780-51028c298274-12
>> > (Store kylin_metadata@hbase)
>> >
>> > [pool-7-thread-10]:[2015-10-20
>> >
>> >
>> 10:44:58,172][DEBUG][org.apache.kylin.common.persistence.ResourceStore.putResource(ResourceStore.java:195)]
>> > - Saving resource
>> /execute_output/476fa1ea-25ea-4858-b780-51028c298274-12
>> > (Store kylin_metadata@hbase)
>> >
>> > On Sat, Oct 17, 2015 at 11:13 AM, Luke Han [via Apache Kylin
>> (Incubating)]
>> > <
>> > [hidden email]> wrote:
>> >
>> > > Hi Shailesh,
>> > >     If timing is a concern, we strongly suggest downgrading your HBase
>> to
>> > > 0.98 with Kylin. The 1.x branch is not fully tested yet.
>> > >
>> > >     If you still would like to try with HBase 1.x, please clone this
>> > > branch:
>> > >     https://github.com/apache/incubator-kylin/tree/1.x-HBase1.x
>> > >
>> > >     And, then run ./script/package.sh to generate binary package
>> > >     Then copy package from dist folder and install with your Hadoop
>> > > cluster.
>> > >
>> > >      BTW, which distribution are you using now? CDH or HDP?
>> > >
>> > >     Thanks.
>> > >
>> > > Luke
>> > >
>> > >
>> > > Best Regards!
>> > > ---------------------
>> > >
>> > > Luke Han
>> > >
>> > > On Sat, Oct 17, 2015 at 8:29 AM, sdangi <[hidden email]
>> > > <http:///user/SendEmail.jtp?type=node&node=1996&i=0>> wrote:
>> > >
>> > > > Luke/Kylin Team -- Any further updates/guidance you could offer?
>> > Latest
>> > > > clone does not work w/ 1.1 version of HBase.
>> > > >
>> > > > We are working on a time-sensitive POC for a financial client and
>> > > > appreciate
>> > > > your responses.
>> > > >
>> > > > Thanks,
>> > > > Regards,
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > View this message in context:
>> > > >
>> > >
>> >
>> http://apache-kylin-incubating.74782.x6.nabble.com/SAMPLE-CUBE-FAILS-tp1936p1994.html
>> > > > Sent from the Apache Kylin (Incubating) mailing list archive at
>> > > Nabble.com.
>> > > >
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> >
>> http://apache-kylin-incubating.74782.x6.nabble.com/SAMPLE-CUBE-FAILS-tp1936p2037.html
>> > Sent from the Apache Kylin (Incubating) mailing list archive at
>> Nabble.com.
>> >
>>
>
>
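On the "Mkdirs failed to create /tmp/hbase-hbase/local/jars/tmp" failure
quoted above, a rough workaround sketch is to pre-create the staging directory
the HBase master could not make. The path comes from the error message itself;
the service user name ("hbase") and the HBASE_TMP override are assumptions,
not from the thread:

```shell
# Pre-create the local jar-staging directory named in the error message so
# the coprocessor jar staging (and table sanity check) can proceed.
# HBASE_TMP and the "hbase" service user are assumptions; adjust to your
# deployment.
HBASE_TMP="${HBASE_TMP:-/tmp/hbase-hbase}"
mkdir -p "$HBASE_TMP/local/jars/tmp"
# chown usually needs root; ignore the failure when running unprivileged.
chown -R hbase:hbase "$HBASE_TMP" 2>/dev/null || true
ls -ld "$HBASE_TMP/local/jars/tmp"
```

The cleaner fix remains setting hadoop.tmp.dir / hbase.fs.tmp.dir per
KYLIN-953, since a hand-made directory under /tmp can disappear on reboot or
tmp cleanup.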