Let’s take a look at Linear Algebra. Too hard. Goodbye.
Just joking! Hahaha! Of course there is no detour. You have to learn it. So today let’s review a few of the most basic matrix operations, in case I lose my memory.
Matrix addition is straightforward and stupid: just add entry by entry.

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a+e & b+f \\ c+g & d+h \end{bmatrix}$$
Subtraction works the same way.
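To make this concrete, here’s a tiny NumPy sketch (my own toy values, not from the original post) showing entry-by-entry addition and subtraction:

```python
# Element-wise matrix addition and subtraction (hypothetical demo values).
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A + B)  # each entry of A added to the matching entry of B
print(A - B)  # same idea, entry by entry
```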
Matrix multiplication is where things begin to get funny (or annoying). However, according to 3b1b, it can be interpreted in a very intuitive way.
First, let’s take a look at a regular vector: $\vec{v} = \begin{bmatrix} x \\ y \end{bmatrix}$. It is easy to find out that all of those regular vectors can be represented in a special way: $\vec{v} = x\hat{i} + y\hat{j}$, in which $\hat{i}$ is perpendicular to the y-axis and its length is 1, and $\hat{j}$ is perpendicular to the x-axis and its length is also 1. Now, imagine you scale $\hat{i}$ by a scalar of 2. Then you will find out, according to the formula above, that $\vec{v}$’s x component has been scaled by a factor of 2! Isn’t that interesting?
The linear algebra multiplication mechanism is built around this: by morphing the basis vectors, we can morph the whole world indirectly (since those basis vectors can represent every vector in the world coordinate system). And here’s what we do during matrix multiplication:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = x \begin{bmatrix} a \\ c \end{bmatrix} + y \begin{bmatrix} b \\ d \end{bmatrix}$$
Not hard to understand, huh? Because that’s the way it is intended to be. Normally, we know that $\hat{i} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\hat{j} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, which translates the equation above to

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = x \begin{bmatrix} 1 \\ 0 \end{bmatrix} + y \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$
Look at that! An identity-matrix multiplication. We all know what that means: nothing changes. However, what if we get a little bit creative and swap $\hat{i}$ and $\hat{j}$? That would change the equation to

$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = x \begin{bmatrix} 0 \\ 1 \end{bmatrix} + y \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} y \\ x \end{bmatrix}$$
As the basis vectors got swapped, the original x component of $\vec{v}$ becomes its y component, and the original y component becomes its x component. That means $\vec{v}$ has been flipped to the other side of the line $y = x$! Badass! If this is applied to a whole set of vectors, changing the basis vectors necessarily means transforming the whole world! And that’s why multiplication of matrices works like this. Of course, we can also make the basis vectors parallel, such as $\hat{i} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\hat{j} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$. But that would mean the space has been squished onto the line where those vectors rest, and the space devolves into a one-dimensional line, along with all the vectors that originally resided on that 2D plane. So fun!
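Here’s a quick NumPy sketch of both effects (my own example matrices, assumptions rather than anything from the post): the swap matrix mirrors a vector across $y = x$, and a matrix with parallel columns squishes the plane onto a line.

```python
# The "swap" matrix flips vectors across the line y = x; a matrix whose
# columns are parallel collapses the 2D plane onto a 1D line.
import numpy as np

swap = np.array([[0, 1],
                 [1, 0]])             # columns are the swapped basis vectors
v = np.array([3, 1])
print(swap @ v)                       # the x and y components trade places

squish = np.array([[1, 2],
                   [1, 2]])           # parallel columns: (1, 1) and (2, 2)
print(np.linalg.matrix_rank(squish))  # rank 1: everything lands on one line
```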
Well, that very much concludes matrix multiplication. We should not understand it as one finger sliding left along a row while the other finger moves down a column; nope. Understand it this way! For this, I really appreciate 3b1b.
The matrix inverse has always been introduced through a very weird topic, which is solving systems of equations, such as:

$$\begin{cases} a_1 x + b_1 y = c_1 \\ a_2 x + b_2 y = c_2 \end{cases}$$
If one or more variables are missing from one or more equations, just assume their coefficients are 0 (because they are). And then we can begin constructing our little matrix!
As you might notice the pattern already, the coefficients can easily be formed into a matrix:

$$A = \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix}$$

And so can the variables; they can be formed into a vector just as easily:

$$\vec{x} = \begin{bmatrix} x \\ y \end{bmatrix}$$

And the results, as well!

$$\vec{c} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$$
And now all of this becomes an easy matrix multiplication!

$$A\vec{x} = \vec{c}$$
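As a sanity check, NumPy can solve such a system directly (the numbers below are my own made-up example, not from the post):

```python
# Solving A·x = c for x with NumPy (hypothetical coefficients).
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 1.0]])   # coefficient matrix: 2x + 3y = 7, 4x + y = 9
c = np.array([7.0, 9.0])     # right-hand side

x = np.linalg.solve(A, c)    # finds the vector x with A @ x == c
print(x)                     # the solution (x, y)
print(A @ x)                 # multiplying back recovers c
```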
And we know that a unique solution for $\vec{x}$ exists if $\det(A) \neq 0$.
Oh yeah, sorry about that. $\det$ stands for the determinant, which in turn stands for the ratio between the area spanned by the scaled basis vectors and the area spanned by the original basis vectors.
However, as this post is about matrices, I am not gonna dive deep here. If you are interested, go here to learn more. Please, knock yourself out.
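Still, one quick NumPy sketch is worth showing (my own example matrices): the determinant measures how areas scale, and a determinant of 0 is exactly the “squished onto a line” case from before.

```python
# The determinant as an area-scaling factor (hypothetical demo matrices).
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])       # stretches x by 2 and y by 3
print(np.linalg.det(A))          # areas grow by a factor of 6

squish = np.array([[1.0, 2.0],
                   [1.0, 2.0]])  # parallel basis vectors
print(np.linalg.det(squish))     # 0: the plane collapses, no area left
```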
As I was just sayin’, a unique solution for $\vec{x}$ exists if and only if $\det(A) \neq 0$. So how could we solve for $\vec{x}$? With this newly introduced concept of the matrix inverse, of course!
As we all know, a vector can be transformed by a matrix in this way:

$$\vec{v}\,' = A\vec{v}$$
So is there a way to reverse this transformation?
Turns out there is! And it is known as $A^{-1}$, which means the inverse of $A$. It is also not hard to infer that $A^{-1}(A\vec{v}) = \vec{v}$, since applying a transformation and then reversing it lands every vector back where it started. It can thus be inferred that $A^{-1}A$ changes nothing at all. And thus is how a very important equation is born:

$$A^{-1}A = I$$
Applying this to our equation above means:

$$A\vec{x} = \vec{c} \implies A^{-1}A\vec{x} = A^{-1}\vec{c} \implies \vec{x} = A^{-1}\vec{c}$$
… And that’s how it’s done. The definition of the inverse matrix. Have a lot of fun!
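The whole derivation can be sketched in NumPy (my own toy numbers; `np.linalg.inv` is the library’s inverse routine):

```python
# Computing A^-1 and using it to solve A·x = c (hypothetical values).
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 1.0]])
c = np.array([7.0, 9.0])

A_inv = np.linalg.inv(A)
print(np.allclose(A_inv @ A, np.eye(2)))  # A^-1 A = I, as derived above
print(A_inv @ c)                          # x = A^-1 c, the solution
```

In practice, `np.linalg.solve(A, c)` is preferred over forming the inverse explicitly; it is faster and numerically more stable.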
There is still so much more in the world of linear algebra! And as this post is getting more and more gargantuan, I guess I am gonna stop right here. Ponder the wonder of algebra and eat an egg!