AlixPartners - Southfield, MI
posted 5 months ago
At AlixPartners, we tackle the most complex and critical challenges by moving swiftly from analysis to action when it truly matters. Our goal is to create value that has a lasting impact on companies, their people, and the communities they serve. We value diversity and inclusion and seek individuals who are intellectually curious, inventive, and forward-thinking. We invite you to influence our work and help define how we embrace the future. By understanding, respecting, and honoring the needs of our employees, clients, and communities, AlixPartners actively promotes an inclusive environment. We believe in the value that diversity brings to our experiences and are committed to continuously improving our initiatives, policies, and practices, holding ourselves accountable by providing a space for authenticity, growth, and equity for everyone.

In this role, you will work closely with owners, boards, and CEOs to address the pressing issues at the top of their agendas. You will engage with financially secure, underperforming, and distressed companies across a variety of urgent, high-impact situations. Our professionals are recognized experts in their fields, applying their skills and experience to deliver measurable, improved outcomes for our clients.

You will create ETL workflows, scripts, statistical models, and visualizations, and own the design, build, test, execution, and support of data migration, cleansing, and wrangling processes. The ideal candidate has a detailed understanding of the underlying data and data structures of multiple systems, enabling in-depth analysis of existing and potential data insights. Responsibilities include data modeling, feature selection, building and optimizing classifiers with machine learning techniques, and executing machine learning projects using state-of-the-art methods. You will also extend the company's data with third-party sources where necessary, create automated anomaly detection systems, and continuously track their performance.

Experience with common data science toolkits such as Python, PySpark, and R is essential; excellence in at least one of them is highly desirable. A strong understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, and Decision Forests, is also required.

You will collect data from a wide variety of corporate sources: SQL databases (Microsoft, Redshift, Teradata, Oracle, Netezza), Access, Excel, plain or formatted text files, OLAP cubes (Microsoft, Oracle), and NoSQL databases. You will parse data from poorly structured XML and invalid HTML documents, use regular expressions to extract information from unstructured text, handle missing data through multiple imputation or advanced models, and automate repetitive tasks with scripts.

You will build effective, reliable, and robust ETL processes that govern the data ingestion flow, design database models with consistent table structures, and apply advanced dimensional schemas that uphold data quality and consistency standards. Familiarity with cloud architectures, particularly Azure, AWS, or GCP, is desired. You will demonstrate advanced SQL skills, including CTEs and window functions, to work with large volumes of data at various aggregation levels.
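To illustrate the kind of SQL in scope, here is a minimal sketch combining a CTE with a window function, run through Python's bundled sqlite3 driver (window functions need SQLite 3.25+, which ships with Python 3.8 and later). The sales table, its columns, and the rows are all hypothetical.

```python
import sqlite3

# Hypothetical sales table, used purely for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('East', '2024-01', 100), ('East', '2024-02', 150),
        ('West', '2024-01', 80),  ('West', '2024-02', 120);
""")

# The CTE aggregates to one row per region/month; the window function
# then computes a running total within each region, ordered by month.
query = """
WITH monthly AS (
    SELECT region, month, SUM(amount) AS total
    FROM sales
    GROUP BY region, month
)
SELECT region,
       month,
       total,
       SUM(total) OVER (PARTITION BY region ORDER BY month) AS running_total
FROM monthly
ORDER BY region, month;
"""
for row in con.execute(query):
    print(row)
```

The CTE keeps the aggregation step readable, and the window function adds a per-region running total without collapsing rows; the same pattern carries over to Redshift, Teradata, SQL Server, and Oracle.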
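On the ETL side, a single ingestion step might look like the following PySpark sketch: read a raw extract, apply a basic quality gate, derive a partition key, and land the result as Parquet. The paths and column names (raw_orders.csv, amount, order_date) are hypothetical, and a local Spark installation is assumed.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

# Read the raw extract; schema inference is convenient here, though a
# production pipeline would pin an explicit schema for stability.
raw = spark.read.csv("raw_orders.csv", header=True, inferSchema=True)

clean = (
    raw.dropDuplicates()
       # Basic quality gate: drop null and non-positive amounts.
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
       # Derive a month key to partition the landed data on.
       .withColumn("order_month", F.date_format(F.col("order_date"), "yyyy-MM"))
)

# Land the cleansed data as Parquet, partitioned for downstream queries.
clean.write.mode("overwrite").partitionBy("order_month").parquet("clean/orders")
```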
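For the modeling requirement, the four algorithm families named above can be compared side by side in scikit-learn. This sketch uses the library's bundled breast-cancer dataset as a stand-in for client data and reads "Decision Forests" as a random forest.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Scaling matters for distance- and margin-based models (k-NN, SVM);
# tree ensembles and Naive Bayes are largely insensitive to it.
models = {
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Naive Bayes": GaussianNB(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:13s} mean accuracy: {scores.mean():.3f}")
```

Wrapping k-NN and the SVM in pipelines keeps feature scaling inside each cross-validation fold, so no information leaks from the held-out data.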
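For missing data, scikit-learn's IterativeImputer implements a chained-equations approach in the spirit of multiple imputation (MICE); the correlated synthetic data below is purely illustrative.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 3] = X[:, 0] * 2 + rng.normal(scale=0.1, size=200)  # correlated column
mask = rng.random(X.shape) < 0.1                         # 10% missing at random
X_missing = X.copy()
X_missing[mask] = np.nan

# IterativeImputer models each feature with missing values as a function
# of the others; sample_posterior=True draws imputations rather than using
# the regression mean, so repeated runs yield multiple imputed datasets.
imputer = IterativeImputer(sample_posterior=True, random_state=0)
X_imputed = imputer.fit_transform(X_missing)
print("imputed cells:", mask.sum(),
      "max abs error:", np.abs(X_imputed[mask] - X[mask]).max().round(2))
```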
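Parsing invalid HTML and regex extraction often combine as in this sketch. BeautifulSoup with the lenient html.parser backend is one common choice (an assumption, not a tool the posting prescribes), and the invoice markup and patterns are invented for illustration.

```python
import re
from bs4 import BeautifulSoup  # lenient parser; a common choice for broken markup

# Invalid HTML with unclosed td/tr tags; a lenient parser still recovers
# the text content where a strict XML parser would raise an error.
html = ("<table><tr><td>Invoice INV-2024-0042<td>Due: 2024-07-15"
        "<tr><td>Invoice INV-2024-0043")
text = BeautifulSoup(html, "html.parser").get_text(" ")

# Regexes then pull structured fields out of the recovered free text.
invoices = re.findall(r"INV-\d{4}-\d{4}", text)
dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)
print(invoices)  # ['INV-2024-0042', 'INV-2024-0043']
print(dates)     # ['2024-07-15']
```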
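For automated anomaly detection, an isolation forest is one reasonable baseline; the posting does not prescribe an algorithm, and the synthetic transactions below stand in for a real feed.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # routine records
outliers = rng.uniform(low=-6, high=6, size=(10, 2))    # injected anomalies
X = np.vstack([normal, outliers])

# contamination is the expected anomaly share; in production it would be
# tuned, and the flagged rate monitored over time to catch drift.
detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(X)  # -1 = anomaly, 1 = normal
print("flagged:", int((labels == -1).sum()), "of", len(X))
```

Tracking the flagged rate over time is the "continuously track their performance" part of the role: a sudden jump usually signals upstream data drift rather than a wave of true anomalies.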
Reviewing and analyzing legacy code and scripts to understand data processing logic and business rules will also be essential, as will the ability to apply statistical learning methods to build predictive models.
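Finally, a minimal sketch of that predictive-modeling expectation: fit on historical data, then validate on a held-out split before the model informs any decision. scikit-learn's bundled diabetes dataset stands in for client data here.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit on historical data, then score on a held-out set to estimate how
# the model will generalize before it is used for client-facing work.
model = LinearRegression().fit(X_train, y_train)
print(f"held-out R^2: {r2_score(y_test, model.predict(X_test)):.3f}")
```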