
Namik Hrle

Namik Hrle is an IBM Fellow and Vice President of Development in IBM’s Data and AI division. He is an international member of the Academy of Sciences and Arts of Bosnia and Herzegovina.


Namik Hrle is a Vice President of Development in IBM’s Data and AI division. He holds the title of IBM Fellow, the highest distinction of IBM’s technical career, given to a small number of individuals who have demonstrated the highest level of technical leadership, innovation, business impact, social eminence and talent building. The holder of 77 patents and the recipient of numerous outstanding technical achievement, author recognition and corporate awards, Namik has a worldwide reputation as a leading expert in using data and AI for the digital transformation and reinvention of enterprise applications.
Namik is also Director of the IBM Data and AI Development Lab in Boeblingen, Germany. The teams reporting to him span multiple locations in Germany, China, Canada, India and the USA and develop a range of technologies from the Data and AI portfolio: AI application components, machine learning, core database technology, Z data and AI products, data science tools, SAP enablement, database administration tools, query acceleration, information lifecycle governance, RegTech/GRC solutions, ...
One of the experts most sought after by IBM sales, marketing and technical support teams worldwide, and a crucial player in numerous proofs of concept, benchmarks and executive briefings, he has been directly linked to many customer success stories.
He has spoken publicly on hundreds of occasions, frequently delivering keynote presentations at major industry and university conferences on data and AI topics.

Namik is a member of the Academy of Sciences and Arts of Bosnia and Herzegovina (ANUBIH), as an international member in the Department of Technical Sciences, and a member of the Bosnian-Herzegovinian American Academy of Arts and Sciences (BHAAAS).

Business Analytics, Big Data, systems of engagement, IoT and the cloud delivery model create new requirements that profoundly affect database management system technology. Columnar orientation, in-memory databases, NoSQL stores, and Spark and Hadoop integration are trends that have already proven their value in some of the most challenging application workloads. Hybrid systems promise the convergence of transactional and analytical processing and enable a new way of deriving insight from data, fueling new or significantly enhanced business models. At the same time, traditional relational database management systems still form the foundation of a large majority of mission-critical, core business applications. Where do traditional DBMS offerings fit within these new technology trends and business requirements? What is database providers’ strategy to remain relevant under the new conditions? These questions will be addressed in this keynote presentation, which will discuss bringing together all data in all paradigms (transactional, analytical, unstructured, etc.) with the goal of “making data simple” to consume.

Ronald Barber, P. Bendel, Marco Czech, Oliver Draese, Frederick Ho, Namik Hrle, Stratos Idreos, Min-Soo Kim, Oliver Koeth et al.

The Blink project’s ambitious goal is to answer all Business Intelligence (BI) queries in mere seconds, regardless of the database size, with an extremely low total cost of ownership. Blink is a new DBMS aimed primarily at read-mostly BI query processing that exploits scale-out of commodity multi-core processors and cheap DRAM to retain a (copy of a) data mart completely in main memory. Additionally, it exploits proprietary compression technology and cache-conscious algorithms that reduce memory bandwidth consumption and allow most SQL query processing to be performed on the compressed data. Blink always scans (portions of) the data mart in parallel on all nodes, without using any indexes or materialized views, and without any query optimizer to choose among them. The Blink technology has thus far been incorporated into two IBM accelerator products generally available since March 2011. We are now working on the next generation of Blink, which will significantly expand the “sweet spot” of the Blink technology to much larger, disk-based warehouses and allow Blink to “own” the data, rather than copies of it.
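Blink’s compression format and algorithms are proprietary, but the general idea of evaluating SQL predicates directly on compressed data can be sketched with ordinary dictionary encoding: each distinct value is replaced by a small integer code, and a predicate constant is translated into code space once, so the scan compares integers instead of the original values. The function names below are illustrative, not Blink’s.

```python
# Illustrative sketch (not Blink's actual format): with a dictionary-encoded
# column, an equality predicate can be evaluated on small integer codes,
# so the scan never decompresses the column.

def dictionary_encode(values):
    """Build a sorted dictionary and replace each value by its code."""
    dictionary = sorted(set(values))
    code_of = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [code_of[v] for v in values]

def scan_equals(codes, dictionary, predicate_value):
    """Evaluate 'column = predicate_value' on the codes: translate the
    predicate constant into code space once, then compare integers."""
    try:
        target = dictionary.index(predicate_value)
    except ValueError:
        return []                       # value never occurs in the column
    return [row for row, c in enumerate(codes) if c == target]

dictionary, codes = dictionary_encode(["DE", "US", "DE", "FR", "US", "DE"])
rows = scan_equals(codes, dictionary, "DE")    # rows 0, 2, 5
```

Because the codes are dense small integers, many of them fit in one cache line or register, which is what makes the cache-conscious, bandwidth-frugal scans described above possible.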


Namik Hrle, Oliver Draese

The IBM Smart Analytics Optimizer for DB2 for z/OS is a new technology that extends existing data warehouse environments on IBM mainframe systems. It is a workload-optimized appliance that enables customers to analyze huge amounts of data in seconds rather than minutes or hours by delivering unmatched performance. This not only allows “train-of-thought” analysis as an interactive scenario, but also enables business requests that were simply impossible before. Analytical workloads can now be executed as an online process instead of as asynchronous batch processing; a call center employee, for example, can analyze a customer’s behavior pattern while the customer is still on the phone. To achieve this performance, the Smart Analytics Optimizer is implemented as a distributed, in-memory system in which a cluster of computing nodes holds the data in a specialized format in main-memory structures. New technology enables the product to scan compressed data without decompressing it before applying predicates. A special partitioning scheme allows parallel processing of the data with as little locking as possible. As the industry trend shows that increases in single-thread performance are no longer achievable, while even standard computers now ship with multiple CPU cores, the Smart Analytics Optimizer is designed to exploit this hardware as well as possible by assigning specific subsets of data to specific cores. The product itself runs on a cluster whose standard instances own hundreds of cores and terabytes of main memory. Even within a single computing core, the product uses SIMD instructions to evaluate predicates on multiple tuples in parallel. Beyond the raw performance of this new product, its deep integration might be considered even more important. The Smart Analytics Optimizer is not a stand-alone product like those offered by several other vendors.
Instead, it extends the existing relational database manager (DB2) with its functionality without requiring any changes to existing application environments. Programs that connected to DB2 before simply continue to execute their workload against the mainframe database. Internal DB2 functionality then decides whether to use the Smart Analytics Optimizer. The granularity of this decision is a query block, which implies that a single query with multiple query blocks can be executed partially on the Smart Analytics Optimizer and partially on the mainframe directly. The joined results are returned to the requesting application by DB2, hiding the complexity of the different execution environments and the required transformations.
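The idea of evaluating a predicate on multiple tuples per instruction can be illustrated without actual SIMD hardware, using the classic “SIMD within a register” (SWAR) technique: pack several small codes into one machine word and test all lanes at once with a few integer operations. This plain-Python sketch is only an analogy for the vectorized scans described above; a real engine would use SIMD intrinsics on packed column data.

```python
# SWAR ("SIMD within a register") sketch: pack four 8-bit column codes
# into one 32-bit word and test all four lanes against a predicate
# constant with a handful of integer operations, mimicking how a
# vectorized scan evaluates a predicate on several tuples at once.

LO = 0x01010101      # low bit of every byte lane
HI = 0x80808080      # high bit of every byte lane

def pack4(codes):
    """Pack four 8-bit codes (lane 0 = lowest byte) into a 32-bit word."""
    return codes[0] | codes[1] << 8 | codes[2] << 16 | codes[3] << 24

def lanes_equal(word, target):
    """Return four booleans telling which byte lanes equal `target`.
    Uses the classic 'has-zero-byte' trick on word XOR broadcast(target):
    equal lanes become 0x00, and the expression below sets the high bit
    of exactly the zero lanes."""
    v = word ^ (target * LO)                  # equal lanes -> 0x00
    zero_mask = (v - LO) & ~v & HI            # high bit set per zero lane
    return [bool(zero_mask >> (8 * i + 7) & 1) for i in range(4)]

word = pack4([7, 3, 7, 9])
hits = lanes_equal(word, 7)     # [True, False, True, False]
```

The same trick scales to wider words (eight lanes in 64 bits), and hardware SIMD registers extend it further, which is why assigning a data subset to each core and vectorizing within the core multiply together.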

P. Bendel, Oliver Draese, Namik Hrle, Tianchao Li

A computerized method for encoding or compressing a file, in which occurrence statistics of the data values are generated for coding the file, comprising the steps of: (A) dividing the file into a plurality of data stacks; (B) determining the occurrences of data values in a first data stack; (C) determining occurrence count information for at most a first number (M) of the most frequent data values in the data stack, the occurrence count information indicating the most common data values and their occurrence counts; (D) generating at least a first histogram with a second number (N) of intervals for the remaining data values in the data stack; (E) determining the occurrences of data values in a further data stack; (F) determining occurrence count information for at most a first number (M) of the most common data values in the further data stack, wherein the occurrence count information indicates the most common data values and their occurrence counts; (G) generating at least one further histogram with a second number (N) of intervals for the remaining data values in the data stack; (H) combining the occurrence count information of the further data stack with the occurrence count information of the first processed data stack by adding the occurrence counts for elements with the same value; (I) merging the histogram of the further data stack with the histogram of the first processed data stack by adding the counts for corresponding histogram intervals; ...
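The claimed statistics-gathering scheme can be sketched compactly: per chunk (“stack”), keep exact counts for the M most frequent values and an N-interval histogram for the rest, then merge chunk summaries by adding counts for matching values and for corresponding intervals. The symbols M and N follow the claim; the fixed global value range used for bucketing below is an assumption for illustration only.

```python
# Hedged sketch of the claimed scheme: top-M exact counts per stack plus
# an N-interval histogram for the remaining values, merged across stacks
# by adding counts for equal values and corresponding intervals.

from collections import Counter

M, N = 2, 4                  # top-M exact counts, N histogram intervals
LO_VAL, HI_VAL = 0, 100      # assumed global value range for bucketing

def summarize(stack):
    counts = Counter(stack)
    top = dict(counts.most_common(M))          # exact counts for top-M values
    width = (HI_VAL - LO_VAL) / N
    hist = [0] * N
    for v, c in counts.items():
        if v not in top:                       # the rest go to the histogram
            bucket = min(int((v - LO_VAL) / width), N - 1)
            hist[bucket] += c
    return top, hist

def merge(summary_a, summary_b):
    top_a, hist_a = summary_a
    top_b, hist_b = summary_b
    merged_top = dict(Counter(top_a) + Counter(top_b))   # add equal values
    merged_hist = [a + b for a, b in zip(hist_a, hist_b)]
    return merged_top, merged_hist

s1 = summarize([1, 1, 1, 40, 40, 90])
s2 = summarize([1, 40, 40, 40, 60, 90])
top, hist = merge(s1, s2)
```

Note that merging histograms this way presumes all stacks share the same interval boundaries, and that the merged top set may hold more than M entries when the stacks’ frequent values differ; the claim’s later steps (elided above) would govern how the combined statistics are reduced.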

Namik Hrle, A. Maier, James Teng, Julie Watts

Ronald Barber, Peter Bendel, Marco Czech, Oliver Draese, Frederick Ho, Namik Hrle, Stratos Idreos, Min-Soo Kim, Oliver Koeth et al.

...
...
...
