We’ll learn much more about matrices in Linear Algebra. For now, we just need a brief introduction to matrices (for some, this may be a review from Precalculus), since we’ll be using them extensively to solve systems of differential equations.
The transpose of a matrix is simply the matrix you get when you swap its rows and columns. In other words, the first row becomes the first column, the second row becomes the second column, and the nth row becomes the nth column. The determinant of the transpose of a square matrix is always equal to the determinant of the original matrix.
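As a quick illustration of that last property, here’s a minimal sketch, assuming NumPy is available and using a made-up 2×2 matrix:

    import numpy as np

    # A small square matrix, chosen only for illustration
    A = np.array([[2.0, 5.0],
                  [1.0, 3.0]])

    A_T = A.T  # transpose: rows become columns

    print(np.linalg.det(A))    # 1.0
    print(np.linalg.det(A_T))  # 1.0, same as det(A)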
Cramer’s Rule is a simple rule that lets us use determinants to solve a system of equations. It tells us that we can solve for any variable in the system by calculating D_v/D, where D is the determinant of the coefficient matrix, and D_v is the determinant of the coefficient matrix with the answer column substituted into the column representing the variable we’re solving for.
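Here’s a small sketch of Cramer’s Rule on a made-up 2×2 system, assuming NumPy just for the determinant calculations:

    import numpy as np

    # System:  2x + y = 5
    #           x + 3y = 10
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])   # coefficient matrix
    b = np.array([5.0, 10.0])    # answer column

    D = np.linalg.det(A)

    # Substitute the answer column into one column at a time
    A_x = A.copy(); A_x[:, 0] = b   # for D_x, replace the x-column
    A_y = A.copy(); A_y[:, 1] = b   # for D_y, replace the y-column

    x = np.linalg.det(A_x) / D   # 1.0
    y = np.linalg.det(A_y) / D   # 3.0
    print(x, y)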
Now that we know how to use row operations to manipulate matrices, we can use them to simplify a matrix in order to solve the system of linear equations the matrix represents. Our goal will be to use these row operations to change the matrix into either row-echelon form or reduced row-echelon form.
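As a rough sketch of those row operations on a made-up augmented matrix (assuming NumPy only for the array arithmetic):

    import numpy as np

    # Augmented matrix for the system  x + 2y = 4,  3x + 4y = 10
    M = np.array([[1.0, 2.0, 4.0],
                  [3.0, 4.0, 10.0]])

    M[1] = M[1] - 3 * M[0]   # R2 -> R2 - 3R1, clears the entry below the first pivot
    M[1] = M[1] / -2         # R2 -> (-1/2)R2, makes the second pivot 1 (row-echelon form)
    M[0] = M[0] - 2 * M[1]   # R1 -> R1 - 2R2, clears above the pivot (reduced row-echelon form)

    print(M)   # [[1. 0. 2.], [0. 1. 1.]]  ->  x = 2, y = 1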
You already know how to solve systems of linear equations using substitution, elimination, and graphing. This time, we want to talk about how to solve systems using inverse matrices.
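A minimal sketch of the inverse-matrix method, using the same made-up system as the Cramer’s Rule sketch above and assuming NumPy:

    import numpy as np

    # The system written as a matrix equation  Ax = b
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])

    A_inv = np.linalg.inv(A)   # exists only because det(A) is nonzero
    x = A_inv @ b              # x = A^(-1) b
    print(x)                   # [1. 3.]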
So we can simply calculate the determinant: if the determinant is 0, the matrix is not invertible, so we can’t find its inverse; if the determinant is nonzero, the matrix is invertible, so we can find its inverse.
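Here’s a small sketch of that determinant check, assuming NumPy; the helper name inverse_if_possible and the matrices are made up for illustration:

    import numpy as np

    def inverse_if_possible(A):
        # Hypothetical helper: a zero determinant means the matrix is singular (not invertible)
        if np.isclose(np.linalg.det(A), 0.0):
            return None
        return np.linalg.inv(A)

    print(inverse_if_possible(np.array([[1.0, 2.0], [2.0, 4.0]])))  # None, since det = 0
    print(inverse_if_possible(np.array([[1.0, 2.0], [3.0, 4.0]])))  # the inverse, since det = -2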