Things Solver, Serbia
He has a strong technical background in Data Engineering – ranging from data collection, cleansing, and preparation to analytics, and serving data to Data Scientists so they can easily query it. On a daily basis he uses Big Data technologies such as Apache Spark, Hadoop, Hive, Elasticsearch, Python, PostgreSQL, Airflow…
In his professional career, he has worked on many production-scale projects in the Telco, Finance, and Retail industries.
Geospatial Analytics at Scale
Data’s spatial context can be a very important variable in many applications. Massive volumes of spatial data are generated daily – from cell phone usage, commuting between home and work, taxi services, airplanes, drones, various sensors and logs, etc. Geospatial data gives us important insights into customer behavior and movement trends, which can inform decision making and drive various optimizations. To benefit from geospatial context in our applications, we need to be able to efficiently parse geospatial datasets at scale and use them together with other available data sources. Only a limited number of open source tools provide an efficient way to parse and query geospatial data, which makes using geospatial information in Business Intelligence and Predictive Modelling quite a challenge. The main focus of my talk will be utilizing geospatial data at scale with Apache Spark and the Magellan library – the approach we are using within our applications.
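The workhorse behind queries like “which cell phone events fall inside this city district” is the point-in-polygon predicate. As a rough illustration of what such a predicate computes – this is a minimal standalone Python sketch of the classic ray-casting rule, not Magellan’s actual implementation, and the function and sample polygon names are mine:

```python
def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside `polygon`, given as a list of
    (x, y) vertices. Uses the even-odd (ray-casting) rule: count how
    many polygon edges a horizontal ray from the point crosses."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Consider only edges that straddle the horizontal line through y.
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses that horizontal line.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A unit square as a toy "region of interest".
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True: inside the square
print(point_in_polygon(1.5, 0.5, square))  # False: outside the square
```

At scale, the difficulty is not this test itself but running it efficiently over billions of points and millions of polygons, which is where a distributed engine like Spark with a spatial library comes in.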
Date: October 11, 2018