Hi, welcome to my blog about technical topics ranging from machine learning to quantitative finance and cyber security.
Volatility Forecasting using GARCH
The ARCH model (Autoregressive Conditional Heteroskedasticity) was introduced by Robert Engle (1982). A time series is a temporally ordered sequence of events or data points whose characteristics are inherently determined by the passage of time. Starting from a real-valued time series $(X_t)$, each $X_t$ is a random variable, and the discrete index $t$ describes the time points. A random…
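The full post works through the model in detail; as a quick taste, here is a minimal sketch of the GARCH(1,1) conditional-variance recursion in plain numpy. The parameter values (omega, alpha, beta) and the simulated returns are illustrative assumptions, not estimates from real data.

```python
import numpy as np

# Minimal GARCH(1,1) variance recursion: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
# The parameter values below are illustrative assumptions, not fitted estimates.
omega, alpha, beta = 1e-5, 0.08, 0.90

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=500)   # stand-in for daily returns

sigma2 = np.empty_like(returns)
sigma2[0] = returns.var()                   # initialize with the sample variance
for t in range(1, len(returns)):
    sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]

# One-step-ahead volatility forecast (annualized with ~252 trading days)
next_sigma2 = omega + alpha * returns[-1] ** 2 + beta * sigma2[-1]
print("next-day vol:", np.sqrt(next_sigma2), "annualized:", np.sqrt(252 * next_sigma2))
```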
Mastering Futures, Options, and Swaps:
Financial derivatives are powerful instruments used for hedging, speculation, and arbitrage. However, they remain a complex subject for many investors and businesses. This article provides a clear and practical explanation of Futures, Options, and Swaps, with real-world examples.
Understanding Derivatives: A Brief Overview
Derivatives are financial products whose value is derived from an underlying asset…
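As a small taste of the worked examples, the Python sketch below computes the payoff at expiry of a long futures position and a long call option; the prices, strike, and premium are made-up numbers for illustration only.

```python
# Payoff at expiry for two common derivatives (illustrative numbers only).

def long_futures_payoff(spot_at_expiry: float, futures_price: float) -> float:
    # A long futures position gains when the spot ends above the agreed futures price.
    return spot_at_expiry - futures_price

def long_call_payoff(spot_at_expiry: float, strike: float, premium: float) -> float:
    # A call option pays max(S - K, 0); subtracting the premium gives the net profit.
    return max(spot_at_expiry - strike, 0.0) - premium

print(long_futures_payoff(spot_at_expiry=105.0, futures_price=100.0))        # 5.0
print(long_call_payoff(spot_at_expiry=105.0, strike=100.0, premium=2.0))     # 3.0
```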
Objectives of Machine Learning
Classification (Supervised Learning)
The goal of classification is to assign predefined labels to input data based on learned patterns. This task is commonly used in applications such as spam detection, medical diagnosis, and image recognition. Classification models are trained on labeled data, allowing them to categorize new, unseen data into specific classes. Algorithms like decision…
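As a minimal illustration of supervised classification, the sketch below trains a decision tree with scikit-learn on the iris dataset; the dataset and the specific algorithm are convenient choices for this example, not necessarily the ones covered in the full post.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Labeled data: feature vectors X with their known class labels y.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train on the labeled examples, then categorize new, unseen data.
clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```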
Environmental, Social and Governance (ESG)
ESG Risks in Corporate Focus
Sustainability is more than just a trend: it is a necessity. Regulations, societal pressure, and economic developments demand that companies take ESG risks (Environmental, Social, Governance) seriously. But what does this mean in practice? Companies must not only report but also adapt their strategies and systematically manage risks to ensure long-term…
Efficient Market Hypothesis
The Efficient Market Hypothesis: Is the Market Truly Efficient?
The Efficient Market Hypothesis (EMH) is one of the core concepts in financial economics. It raises the question of whether, and to what extent, financial markets fully and immediately reflect all available information in prices. The theory was developed in the 1960s by Eugene Fama, who distinguished…
Naive Bayes
Naive Bayes is a probabilistic classification algorithm based on Bayes' theorem: it computes the conditional probability of each class given the observed features. The features are assumed to be independent of one another, hence "naive". Naive Bayes targets classification problems, which means it is often used in problems like…
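To show the independence assumption at work, here is a tiny hand-computed example in Python: the class prior and per-word likelihoods are invented numbers, and the posterior follows from Bayes' theorem with the likelihoods multiplied as if the words were independent.

```python
# Toy naive Bayes posterior for a two-word "spam" example (made-up probabilities).
# Independence assumption: P(w1, w2 | class) = P(w1 | class) * P(w2 | class).

p_spam = 0.4                       # prior P(spam)
p_ham = 1.0 - p_spam               # prior P(not spam)

# Per-word likelihoods given the class (illustrative values, not learned from data).
p_words_given_spam = 0.8 * 0.6     # P("free" | spam) * P("winner" | spam)
p_words_given_ham = 0.1 * 0.05     # P("free" | ham)  * P("winner" | ham)

# Bayes' theorem with the evidence term as the normalizer.
evidence = p_words_given_spam * p_spam + p_words_given_ham * p_ham
posterior_spam = p_words_given_spam * p_spam / evidence
print("P(spam | 'free', 'winner') =", round(posterior_spam, 4))
```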
Decision Trees
A central concept in building a decision tree is information gain, which is based on entropy. The entropy of a system is defined as: \begin{equation} H(S) = -\sum_{i=1}^{n} p_i \log_2(p_i) \end{equation} where $p_i$ is the probability of class $i$. Higher entropy indicates greater uncertainty in the class distribution. The information gain when splitting by attribute $A$ is…
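To make the formula concrete, the short Python sketch below computes the entropy of a label array and the information gain of a candidate binary split; the toy labels and the split are made up for illustration.

```python
import numpy as np

def entropy(labels: np.ndarray) -> float:
    # H(S) = -sum_i p_i * log2(p_i), where p_i is the relative frequency of class i.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(labels: np.ndarray, split_mask: np.ndarray) -> float:
    # Gain = H(parent) minus the size-weighted entropy of the two child nodes.
    left, right = labels[split_mask], labels[~split_mask]
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - weighted

labels = np.array([0, 0, 0, 1, 1, 1, 1, 0])                       # toy class labels
split = np.array([True, True, True, True, False, False, False, False])
print("entropy:", entropy(labels), "gain:", information_gain(labels, split))
```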
Neural Networks
Neural networks are considered extremely flexible mathematical models whose structure is loosely inspired by the workings of biological nerve cells. In practice, such models consist of multiple layers of interconnected processing units. These units, often referred to as “neurons,” receive numerical inputs, multiply them by learnable parameters (weights), add shifts (biases), and then apply a…
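As a rough sketch of that description, the numpy snippet below runs a single forward pass through one hidden layer with randomly initialized weights and biases; the layer sizes and the ReLU/sigmoid activations are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A single layer of "neurons": multiply inputs by weights, add biases, apply a nonlinearity.
def dense(x, weights, biases, activation):
    return activation(x @ weights + biases)

relu = lambda z: np.maximum(z, 0.0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(4, 3))                     # batch of 4 inputs with 3 features each
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)   # hidden layer: 3 -> 5 units
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)   # output layer: 5 -> 1 unit

hidden = dense(x, W1, b1, relu)
output = dense(hidden, W2, b2, sigmoid)
print(output.ravel())                           # untrained forward pass, values in (0, 1)
```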
Main Types of Machine Learning
Machine learning is a part of artificial intelligence (AI) that focuses on using mathematical methods to learn from data and make predictions or decisions largely autonomously. There are three main types of machine learning. In supervised learning, the model is trained on labeled data, meaning that each input is paired with its corresponding correct output. The…
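To make the "each input is paired with its correct output" idea concrete, here is a minimal supervised learning sketch that fits scikit-learn's LinearRegression to synthetic labeled pairs; the data-generating rule is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Labeled data: each input x is paired with its correct output y (here y ~ 3x + 2 + noise).
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 0.5, size=100)

model = LinearRegression().fit(X, y)            # learn the mapping from the labeled pairs
print("learned slope:", model.coef_[0], "intercept:", model.intercept_)
print("prediction for x=4:", model.predict([[4.0]])[0])
```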