
Big Data

Creating Real-Time Search Dashboards using Apache Solr, Hue, Flume and Cloudera Morphlines

Late last week Cloudera published a blog post on their developer site on building a real-time log analytics dashboard using Apache Kafka, Cloudera Search and Hue [http://blog.cloudera.com/blog/2015/02/how-to-do-real-time-log-analytics-with-apache-kafka-cloudera-search-and-hue/]. As I’d recently been playing around with Oracle Big Data Discovery with our website log…

Technical

Introducing Oracle Big Data Discovery Part 2: Data Transformation, Wrangling and Exploration

In yesterday’s post I looked at Oracle Big Data Discovery and how it brought the search and analytic capabilities of Endeca to Hadoop [https://www.rittmanmead.com/blog/2015/02/introducing-oracle-big-data-discovery-part-1-the-visual-face-of-hadoop/]. We looked at how the Oracle Endeca Information Discovery Studio application works with a version of the Endeca…

Technical

Connecting OBIEE11g on Windows to a Kerberos-Secured CDH5 Hadoop Cluster using Cloudera HiveServer2 ODBC Drivers

In a few previous posts and magazine articles [http://www.oracle.com/technetwork/issue-archive/2014/14-sep/o54ba-2279189.html] I’ve covered connecting OBIEE11g to a Hadoop cluster [https://www.rittmanmead.com/blog/2014/01/obiee-11-1-1-7-cloudera-hadoop-hiveimpala-part-2-load-data-into-hivehcatalog-analyze-using-impala/], using OBIEE 11.1.1.7 and Cloudera CDH4 and CDH5 as the examples. Things…

Technical

OBIEE and ODI on Hadoop: Next-Generation Initiatives To Improve Hive Performance

The other week I posted a three-part series (part 1 [https://www.rittmanmead.com/blog/2014/12/going-beyond-mapreduce-for-hadoop-etl-pt-1-why-mapreduce-is-only-for-batch-processing/], part 2 [https://www.rittmanmead.com/blog/2014/12/going-beyond-mapreduce-for-hadoop-etl-pt-2-introducing-apache-yarn-and-apache-tez/] and part 3 [https://www.rittmanmead.com/blog/2014/12/going-beyond-mapreduce-for-hadoop-etl-pt-3-introducing-apache-spark/]) on going beyond MapReduce for Hadoop-based ETL, where I…