While we usually use functions to map coordinate points, when we map vectors from one space to another, we switch from the language of “functions” to the language of “transformations.” In other words, even though functions and transformations perform the same kind of mapping operation, when we’re mapping vectors we say the mapping is done by a transformation rather than by a function.
In the previous lesson, we looked at an example of a linear transformation that included a reflection and a stretch. We can apply the same process to other kinds of transformations, like compressions. We can also use a linear transformation to rotate a vector by a certain angle, measured either in degrees or in radians.
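As a quick illustration, here’s a minimal NumPy sketch of a rotation (the 90° angle and the test vector are just sample values):

```python
import numpy as np

# Rotation in R^2 by an angle theta (counterclockwise) is the linear
# transformation with matrix [[cos theta, -sin theta], [sin theta, cos theta]].
theta = np.radians(90)  # sample angle: rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])
print(R @ v)  # approximately [0. 1.]: i rotates onto j
```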
The transpose of a matrix is simply the matrix you get when you swap all the rows and columns. In other words, the first row becomes the first column, the second row becomes the second column, and the nth row becomes the nth column. The determinant of the transpose of a square matrix is always equal to the determinant of the original matrix.
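Here’s a quick NumPy check of that transpose property (the matrix entries are just sample values):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

print(A.T)                 # rows and columns swapped
print(np.linalg.det(A))    # -2.0 (up to floating-point error)
print(np.linalg.det(A.T))  # also -2.0, matching det(A)
```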
The span of a set of vectors is the collection of all vectors that can be represented by some linear combination of the set. That sounds confusing, but let’s think back to the basis vectors i=(1,0) and j=(0,1) in R^2. If you choose absolutely any vector, anywhere in R^2, you can get to that vector using a linear combination of i and j. If I choose (13,2), I can get to it with the linear combination a=13i+2j, and if I choose (-1,-7), I can get to it with the linear combination a=-i-7j. There’s no vector in R^2 that you can’t reach with a linear combination of i and j.
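We can re-create those two linear combinations numerically; this short NumPy sketch just repeats the arithmetic above:

```python
import numpy as np

i = np.array([1, 0])
j = np.array([0, 1])

# Any vector in R^2 is a linear combination of i and j.
a = 13 * i + 2 * j
b = -1 * i - 7 * j
print(a)  # [13  2]
print(b)  # [-1 -7]
```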
Now that we know how to use row operations to manipulate matrices, we can use them to simplify a matrix in order to solve the system of linear equations the matrix represents. Our goal will be to use these row operations to change the matrix into either row-echelon form or reduced row-echelon form.
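If you want to check your row reduction, SymPy’s rref() returns the reduced row-echelon form. Here’s a small sketch with a sample augmented matrix I made up for illustration:

```python
from sympy import Matrix

# Augmented matrix for the sample system  x + 2y = 5,  3x + 4y = 6
M = Matrix([[1, 2, 5],
            [3, 4, 6]])

rref_M, pivots = M.rref()
print(rref_M)  # Matrix([[1, 0, -4], [0, 1, 9/2]]) -> x = -4, y = 9/2
```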
We know how to find the null space of a matrix as the full set of vectors x that satisfy Ax=0. But now we want to be able to solve the more general equation Ax=b. In other words, we want to solve this equation when the vector on the right side is some nonzero b, instead of being limited to solving it only when b=0.
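Here’s a minimal NumPy sketch of solving Ax=b for a sample invertible A (the values are made up for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# np.linalg.solve finds the unique x with Ax = b when A is invertible.
x = np.linalg.solve(A, b)
print(x)      # [1. 3.]
print(A @ x)  # recovers b
```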
So we can simply calculate the determinant. If the determinant is 0, the matrix is not invertible, and you can’t find its inverse; if the determinant is nonzero, the matrix is invertible, and you can find its inverse.
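That test is easy to express in code. Here’s a small NumPy sketch with a hypothetical helper, try_inverse, that returns None for a singular matrix:

```python
import numpy as np

def try_inverse(A):
    # If det(A) = 0, the matrix is singular and has no inverse.
    if np.isclose(np.linalg.det(A), 0.0):
        return None
    return np.linalg.inv(A)

print(try_inverse(np.array([[1.0, 2.0], [2.0, 4.0]])))  # None: det = 0
print(try_inverse(np.array([[1.0, 2.0], [3.0, 4.0]])))  # det = -2, invertible
```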
We can conclude that every span is a subspace. Remember that the span of a vector set is all the linear combinations of that set, and any such collection of linear combinations always satisfies the conditions of a valid subspace.
A subspace (or linear subspace) of R^2 is a set of two-dimensional vectors within R^2, where the set meets three specific conditions: 1) The set includes the zero vector, 2) The set is closed under scalar multiplication, and 3) The set is closed under addition.
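These conditions can be spot-checked numerically. Here’s a NumPy sketch that tests a few sample vectors from the line y=2x, which is a subspace of R^2 (a numeric spot check on sample vectors, not a proof):

```python
import numpy as np

# Membership test for the set of all vectors (t, 2t), i.e. the line y = 2x.
def in_set(v):
    return np.isclose(v[1], 2 * v[0])

u, w = np.array([1.0, 2.0]), np.array([-3.0, -6.0])
print(in_set(np.zeros(2)))  # 1) contains the zero vector
print(in_set(4.0 * u))      # 2) closed under scalar multiplication
print(in_set(u + w))        # 3) closed under addition
```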
Any vector with a magnitude of 1 is called a unit vector, u. In general, a unit vector doesn’t have to point in a particular direction; as long as the vector is one unit long, it’s a unit vector. But oftentimes we’re interested in changing a particular vector v (with a length other than 1) into an associated unit vector. In that case, the unit vector needs to point in the same direction as v.
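To build that unit vector, we divide v by its own magnitude. Here’s a minimal NumPy sketch with v=(3,4) as a sample vector:

```python
import numpy as np

v = np.array([3.0, 4.0])
u = v / np.linalg.norm(v)  # divide v by its magnitude

print(u)                   # [0.6 0.8], pointing the same direction as v
print(np.linalg.norm(u))   # 1.0, so u is a unit vector
```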
In this lesson we’ll look at how to solve systems of three linear equations in three variables. If a system of three linear equations has solutions, each solution will consist of one value for each variable. If the three equations in such a linear system are “independent of one another,” the system will have either one solution or no solutions. All the systems of three linear equations that you’ll encounter in this lesson have at most one solution.
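Here’s a quick NumPy sketch that solves one sample 3×3 system (the coefficients are made up for illustration):

```python
import numpy as np

# Sample system:  x + y + z = 6,  2y + 5z = -4,  2x + 5y - z = 27
A = np.array([[1.0, 1.0,  1.0],
              [0.0, 2.0,  5.0],
              [2.0, 5.0, -1.0]])
b = np.array([6.0, -4.0, 27.0])

print(np.linalg.solve(A, b))  # [ 5.  3. -2.]
```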
Now that we understand what the determinant is and how to calculate it, we want to look at other properties of determinants so that we can do more with them.
Let’s remember the relationship between perpendicularity and orthogonality. We usually use the word “perpendicular” when we’re talking about two-dimensional space. If two vectors are perpendicular, that means they sit at a 90° angle to one another.
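A standard way to check perpendicularity is the dot product: it’s zero exactly when the vectors meet at a 90° angle. Here’s a tiny NumPy sketch with sample vectors:

```python
import numpy as np

u = np.array([2.0, 1.0])
v = np.array([-1.0, 2.0])

# The dot product is zero exactly when the vectors are perpendicular.
print(np.dot(u, v))  # 0.0, so u and v sit at a 90° angle
```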
We’ve talked about changing bases from the standard basis to an alternate basis, and vice versa. Now we want to talk about a specific kind of basis, called an orthonormal basis, in which every vector in the basis is both 1 unit in length and orthogonal to each of the other basis vectors.
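One handy numeric check: if the basis vectors are the columns of a matrix Q, the basis is orthonormal exactly when Q^T Q is the identity matrix. Here’s a small NumPy sketch with one sample basis:

```python
import numpy as np

# Columns of Q are a sample orthonormal basis of R^2: each has length 1
# and they're orthogonal to each other, so Q^T Q is the identity.
s = 1 / np.sqrt(2)
Q = np.array([[s, -s],
              [s,  s]])

print(np.allclose(Q.T @ Q, np.eye(2)))  # True
```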
In other words, up to now, plotting points has always been done using the standard basis vectors i and j, or i, j, and k in three dimensions. Even when we were originally learning to plot (3,4) back in an introductory Algebra class, and we knew nothing about vectors, we were really learning to plot 3i+4j in terms of the standard basis vectors; we just didn’t know it yet. In this lesson, we want to see what it looks like to define points using different basis vectors. In other words, instead of using i=(1,0) and j=(0,1), can we use different vectors as the basis?
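As a preview, here’s a minimal NumPy sketch. The alternate basis vectors (2,1) and (1,1) are sample values, stacked as the columns of a change-of-basis matrix:

```python
import numpy as np

# Sample alternate basis vectors as the columns of matrix C.
C = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# A point with coordinates (3,4) relative to this basis sits at
# 3*(2,1) + 4*(1,1) = (10,7) in standard coordinates.
coords = np.array([3.0, 4.0])
standard = C @ coords
print(standard)                      # [10.  7.]
print(np.linalg.solve(C, standard))  # back to [3. 4.]
```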
In this lesson we want to talk about the dimensionality of a vector set, which we should start by saying is totally different from the dimensions of a matrix. For now, let’s just say that the dimension of a vector space is given by the number of basis vectors required to span that space.
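Numerically, the dimension of the span of a vector set equals the rank of the matrix whose columns are those vectors. Here’s a quick NumPy sketch with three sample vectors:

```python
import numpy as np

# Three sample vectors as columns; the third is the sum of the first two,
# so they only span a plane.
V = np.column_stack([[1, 0, 1],
                     [2, 1, 0],
                     [3, 1, 1]])

print(np.linalg.matrix_rank(V))  # 2: the span is two-dimensional
```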
Upper triangular matrices are matrices in which all entries below the main diagonal are 0. The main diagonal is the set of entries that run from the upper left-hand corner of the matrix down to the lower right-hand corner of the matrix. Lower triangular matrices are matrices in which all entries above the main diagonal are 0.
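NumPy’s triu and tril functions make this easy to see; here’s a quick sketch on a sample 3×3 matrix:

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)  # [[1 2 3], [4 5 6], [7 8 9]]

print(np.triu(A))  # upper triangular: entries below the main diagonal zeroed
print(np.tril(A))  # lower triangular: entries above the main diagonal zeroed
```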
We know already how to solve systems of linear equations using substitution, elimination, and graphing. This time, we want to talk about how to solve systems using inverse matrices. To walk through this, let’s use a simple system.
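Here’s a minimal NumPy sketch of the idea, with a sample 2×2 system (computing A^(-1)b directly, purely for illustration):

```python
import numpy as np

# Sample system:  x + 2y = 4,  3x + 5y = 9, written as Ax = b
A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
b = np.array([4.0, 9.0])

x = np.linalg.inv(A) @ b  # x = A^(-1) b
print(x)                  # [-2.  3.]
```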
The inverse of an invertible linear transformation T is also itself a linear transformation, which means that the inverse transformation preserves both addition and scalar multiplication. In other words, as long as the original transformation T is itself a linear transformation and is invertible (its inverse is defined, so you can find it), then the inverse transformation T^(-1) is also a linear transformation.
Any vector v that satisfies T(v)=λv is an eigenvector for the transformation T, and λ is the eigenvalue associated with the eigenvector v. The transformation T is a linear transformation that can also be represented as T(v)=Av.
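Here’s a minimal NumPy sketch that finds the eigenvalues and eigenvectors of a sample matrix A and verifies Av=λv:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # 3.0 and 1.0 (order may vary)

# Each column of `eigenvectors` satisfies A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))  # True, True
```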