• Professional career reflects 7 years of experience in the IT industry, including around 3 years of extensive working experience in the Big Data ecosystem and its components such as HDFS, Hive, Spark, Scala, Sqoop, and Kafka.
• Experience in writing Spark programs that implement transformations and actions using RDDs and Spark SQL.
• Experience working with storage file formats such as Text, SequenceFile, ORC, Avro, and Parquet.
• Implemented a Hadoop data pipeline for a financial reporting application.
• Hands-on experience building stream-processing systems using solutions such as Spark Streaming and Flume.
• Good knowledge and understanding of developing Oozie workflows to automate the loading of data into HDFS.
• Hands-on experience with NoSQL databases (HBase, MongoDB), RDBMSs (SQL Server, MySQL), and data warehousing.
• Experience using Sqoop to migrate data from RDBMSs to HDFS and vice versa.
• Experience in writing complex SQL queries.
• Experience in performance tuning and query optimization in SQL Server.
• Experience in ETL (SSIS) and data warehousing (SSAS).