This is not a traditional tutorial on linear algebra. It covers a wide range of topics, while avoiding excessive use of jargon or advanced math. The material presented here, in a compact style, is rarely taught in college classes. The fundamental tool is the power of a matrix, and its byproduct, the characteristic polynomial. In the end, it has more to do with calculus than matrix algebra. It can solve countless problems, as discussed later in this article, with illustrations.

I used "spectacular" in my title for the following reasons:

- An application to time series (auto-regressive models) features an extremely smooth yet highly chaotic process (Brownian-related). I could not find any other example in the literature, after extensive Google and ArXiv searches. This may be the first time that a picture is produced for this type of strange process. You are not going to learn about it in college classes, or in textbooks. I explain why it is so smooth: it is, implicitly, an integrated Brownian motion, despite no integration being involved. My article will help you understand these advanced stochastic models used by Wall Street, without giving you a headache.
- I play with a powerful technique that can solve a large number of problems, but is rarely used, and certainly never in high dimensions, because it is extremely unstable. I found a way to make it numerically stable, and I explain in detail my related algorithm to solve the problem.
- I explain why the Weibull and Fréchet distributions (used in extreme value theory) are one and the same. Statisticians have been using them for decades, without realizing that they are identical.

*Figure: auto-regressive models, classified based on the type of roots of the characteristic polynomial.*

This article is unusually short despite the wide spectrum of topics covered: only 8 pages long. I explain why this is the case. This simple introduction to matrix theory offers a refreshing perspective on the subject. Using a basic concept that leads to a simple formula for the power of a matrix, I show how it can solve time series, Markov chains, linear regression, linear recurrence equations, the pseudo-inverse and square root of a matrix, data compression, principal components analysis (PCA) or dimension reduction, and other machine learning problems. This material is also discussed in detail, with Python code, in chapter 3 of my book "Intuitive Machine Learning and Explainable AI", available here.
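To give a flavor of the central idea, here is a minimal sketch (my own illustration, not the code from chapter 3 of the book): diagonalizing a matrix turns its n-th power into a one-step formula, which in turn solves a linear recurrence such as Fibonacci in closed form.

```python
import numpy as np

# Companion matrix of the recurrence f(n) = f(n-1) + f(n-2) (Fibonacci).
# Its characteristic polynomial is x^2 - x - 1; the roots (the golden
# ratio and its conjugate) are exactly the eigenvalues used below.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

def matrix_power_eigen(A, n):
    """Compute A^n from the eigendecomposition A = P D P^(-1),
    so that A^n = P D^n P^(-1). Assumes A is diagonalizable."""
    eigvals, P = np.linalg.eig(A)
    return P @ np.diag(eigvals ** n) @ np.linalg.inv(P)

n = 10
An = matrix_power_eigen(A, n)

# The top-right entry of A^n is the n-th Fibonacci number.
fib_n = round(An[0, 1].real)
print(fib_n)  # -> 55
```

The same trick, raising eigenvalues to a power instead of multiplying matrices repeatedly, underlies the applications listed above, from Markov chain steady states to the matrix square root (use the power 1/2).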
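The Weibull–Fréchet claim can be checked numerically. The sketch below (my own, under the standard textbook parameterizations, not the article's derivation) illustrates the change of variable linking the two families: if X is Weibull with shape k and scale 1, then 1/X has the Fréchet distribution with the same shape k.

```python
import numpy as np

rng = np.random.default_rng(0)

k = 2.0                            # shape parameter, shared by both families
x = rng.weibull(k, size=200_000)   # Weibull(k) sample, scale 1
y = 1.0 / x                        # candidate Fréchet(k) sample

# Compare the empirical CDF of y with the Fréchet CDF: exp(-y^(-k)).
y.sort()
ecdf = np.arange(1, y.size + 1) / y.size
frechet_cdf = np.exp(-y ** (-k))

# Maximum deviation (Kolmogorov-Smirnov style); should be small,
# on the order of 1/sqrt(sample size).
print(np.max(np.abs(ecdf - frechet_cdf)))
```

In other words, the two distributions differ only by the substitution x → 1/x, which is one way to make precise the statement that they are "one and the same".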