Transcription
Matrix-Vector Multiplication by MapReduce
From Rajaraman / Ullman, Ch. 2, Part 1
The Google implementation of MapReduce was created to execute very large matrix-vector multiplications. In the ranking of Web pages that goes on at search engines, n is in the tens of billions. PageRank is an iterative algorithm. MapReduce matrix-vector multiplication is also useful for simple (memory-based) recommender systems.
Problem Statement
Given an n × n matrix M, whose element in row i and column j is denoted m_ij, and a vector v of length n, compute the product x = Mv, whose ith component is

x_i = Σ_{j=1}^{n} m_ij · v_j

Assume that:
- the row-column coordinates of each matrix element are discoverable, either from its position in the file or because it is stored with explicit coordinates, as a triple (i, j, m_ij);
- the position of element v_j in the vector v is discoverable in the analogous way.
Case 1: n is large, but not so large that vector v cannot fit in main memory
The Map Function:
- The Map function is written to apply to one element of M.
- v is first read to the compute node executing a Map task and is available for all applications of the Map function at that compute node.
- Each Map task operates on a chunk of the matrix M. From each matrix element m_ij it produces the key-value pair (i, m_ij · v_j).
- Thus, all terms of the sum that make up the component x_i of the matrix-vector product get the same key, i.
Case 1 Continued
The Reduce Function: The Reduce function simply sums all the values associated with a given key i. The result is the pair (i, x_i).
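The Case 1 Map and Reduce steps can be sketched as a single-machine simulation. The function names and the triple-based input format are illustrative assumptions, not code from the text:

```python
from collections import defaultdict

def map_matrix_element(i, j, m_ij, v):
    """Map: from matrix element m_ij emit the key-value pair (i, m_ij * v[j])."""
    return (i, m_ij * v[j])

def matrix_vector_mapreduce(matrix_triples, v):
    """Simulate the Case 1 MapReduce job on one machine.

    matrix_triples: iterable of (i, j, m_ij) with explicit coordinates.
    v: the full vector, assumed to fit in memory (the Case 1 assumption).
    """
    # Map phase: every element of M produces a pair keyed by its row i.
    groups = defaultdict(list)
    for i, j, m_ij in matrix_triples:
        key, value = map_matrix_element(i, j, m_ij, v)
        groups[key].append(value)
    # Reduce phase: sum all values sharing key i to get component x_i.
    return {i: sum(vals) for i, vals in groups.items()}

# 2x2 example: M = [[1, 2], [3, 4]], v = [5, 6]  ->  x = (17, 39)
triples = [(0, 0, 1), (0, 1, 2), (1, 0, 3), (1, 1, 4)]
print(matrix_vector_mapreduce(triples, [5, 6]))  # {0: 17, 1: 39}
```

Grouping by key here stands in for the shuffle that a real MapReduce framework performs between the Map and Reduce phases.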
Case 2: n is too large for vector v to fit into main memory
- v must still be stored at the compute nodes used for the Map tasks.
- Divide the matrix into vertical stripes of equal width and divide the vector into an equal number of horizontal stripes, of the same height.
- Our goal is to use enough stripes so that the portion of the vector in one stripe can fit conveniently into main memory at a compute node.
Figure 2.4: Division of a matrix M and vector v into five stripes
- The ith stripe of the matrix multiplies only components from the ith stripe of the vector.
- Divide the matrix into one file for each stripe, and do the same for the vector.
- Each Map task is assigned a chunk from one of the stripes of the matrix and gets the entire corresponding stripe of the vector. The Map and Reduce tasks can then act exactly as described in Case 1.
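A minimal single-machine sketch of this striping scheme, with one loop iteration playing the role of one Map task (the function name and the even-division assumption are mine, not from the text):

```python
def stripe_matrix_vector(M, v, num_stripes):
    """Sketch of the Figure 2.4 striping scheme.

    M: dense n x n matrix as a list of rows; v: vector of length n.
    Each "Map task" gets one vertical stripe of M plus the matching
    horizontal stripe of v and emits (i, partial_sum) pairs; the
    Reduce step sums the partials across stripes.
    """
    n = len(v)
    width = n // num_stripes           # assume num_stripes divides n evenly
    pairs = []
    for s in range(num_stripes):       # one "Map task" per stripe
        lo, hi = s * width, (s + 1) * width
        for i in range(n):
            # partial contribution of stripe s to component x_i
            pairs.append((i, sum(M[i][j] * v[j] for j in range(lo, hi))))
    # Reduce: sum the partials per row index i
    x = [0] * n
    for i, p in pairs:
        x[i] += p
    return x

M = [[1, 2], [3, 4]]
print(stripe_matrix_vector(M, [5, 6], 2))  # [17, 39]
```

Note that each Map task only needs its own stripe of v in memory, which is exactly the point of the scheme.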
Recommenders are one of the most popular large-scale machine learning techniques, used for example at:
- Amazon
- eBay
- Facebook
- Netflix
Two types of recommender techniques:
- Collaborative Filtering
- Content-Based Recommendation
Collaborative Filtering:
- Model based
- Memory based
  - User similarity based
  - Item similarity based
Item-based collaborative filtering
Basic idea: use the similarity between items (and not users) to make predictions.
Example:
- Look for items that are similar to Item5.
- Take Alice's ratings for these items to predict the rating for Item5.

        Item1  Item2  Item3  Item4  Item5
Alice     5      3      4      4      ?
User1     3      1      2      3      3
User2     4      3      4      3      5
User3     3      3      1      5      4
User4     1      5      5      2      1
The cosine similarity measure

sim(a, b) = (a · b) / (|a| · |b|)
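A small sketch of cosine similarity applied to two item columns of the example table, restricted to users who rated both items (Alice is excluded because she has not rated Item5):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two item rating vectors over co-rating users."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Item1 and Item5 rating columns for User1..User4 from the example table:
item1 = [3, 4, 3, 1]
item5 = [3, 5, 4, 1]
print(round(cosine_similarity(item1, item5), 3))  # 0.994
```

Plain cosine treats all ratings as positive; a common refinement (adjusted cosine) first subtracts each user's mean rating.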
Making predictions
A common prediction function (u is a user; i and p are items; the user has not rated item p):

pred(u, p) = Σ_{i ∈ ratedItems(u)} sim(i, p) · r_{u,i} / Σ_{i ∈ ratedItems(u)} |sim(i, p)|

- Not all neighbors are taken into account for the prediction; the neighborhood is typically limited to a specific size.
- An analysis of the MovieLens dataset indicates that "in most real-world situations, a neighborhood of 20 to 50 neighbors seems reasonable" (Herlocker et al. 2002).
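The prediction function above can be sketched as follows; the similarity values are invented for illustration, not computed from the example table:

```python
def predict_rating(user_ratings, sims_to_p, k=20):
    """Item-based prediction for an unrated item p.

    user_ratings: dict item -> rating given by user u.
    sims_to_p: dict item -> similarity(item, p).
    k: neighborhood size limit (the slide cites 20-50 as reasonable).
    """
    # Keep only items the user rated; take the k most similar as the neighborhood.
    neighbors = sorted(
        (i for i in user_ratings if i in sims_to_p),
        key=lambda i: sims_to_p[i], reverse=True)[:k]
    # Similarity-weighted average of the user's ratings on the neighborhood.
    num = sum(sims_to_p[i] * user_ratings[i] for i in neighbors)
    den = sum(abs(sims_to_p[i]) for i in neighbors)
    return num / den if den else None

ratings = {"Item1": 5, "Item2": 3, "Item3": 4, "Item4": 4}   # Alice's ratings
sims = {"Item1": 0.99, "Item2": 0.85, "Item3": 0.93, "Item4": 0.88}  # assumed values
print(round(predict_rating(ratings, sims), 2))  # 4.04
```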
Pre-processing for item-based filtering
- Item-based filtering does not solve the scalability problem by itself.
- Pre-processing approach by Amazon.com (in 2003):
  - Calculate all pair-wise item similarities in advance.
  - The neighborhood used at run-time is typically rather small, because only items the user has rated are taken into account.
  - Item similarities are supposed to be more stable than user similarities.
- Memory requirements:
  - Up to N² pair-wise similarities to be memorized (N = number of items) in theory.
  - In practice, this is significantly lower (items with no co-ratings).
  - Further reductions are possible:
    - Minimum threshold for co-ratings
    - Limit the neighborhood size (might affect recommendation accuracy)
Mahout's Item-Based RS
- Apache Mahout is an open source Apache Foundation project for scalable machine learning.
- Mahout uses the MapReduce paradigm for scalable recommendation.
Mahout's RS: Matrix Multiplication for Preference
[Figure: item-item similarity matrix multiplied by the active user's preference vector]
As matrix math, again: the inside-out method
[Figure: the same similarity-matrix-times-preference-vector product, worked numerically]
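The "inside-out" method computes the product by walking the nonzero entries of the preference vector and accumulating the corresponding scaled columns of the similarity matrix, rather than taking a dot product per row. A minimal sketch (function name and toy data are mine):

```python
def inside_out_multiply(S, u):
    """Inside-out matrix-vector multiply.

    S: item-item similarity matrix (list of rows).
    u: the active user's preference vector, typically sparse.
    For each nonzero preference u[j], add u[j] times column j of S
    to the running result; items the user never rated are skipped.
    """
    n = len(S)
    result = [0] * n
    for j, pref in enumerate(u):
        if pref == 0:                 # skip unrated items entirely
            continue
        for i in range(n):            # accumulate pref * (column j of S)
            result[i] += S[i][j] * pref
    return result

S = [[2, 1], [1, 3]]                  # toy item-item similarity matrix
u = [4, 0]                            # user rated only item 0
print(inside_out_multiply(S, u))      # [8, 4]
```

Because only rated items trigger work, the cost scales with the number of the user's ratings rather than with the full item count.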
PageRank Algorithm
One iteration of the PageRank algorithm involves taking an estimated PageRank vector v and computing the next estimate v′ by

v′ = βMv + (1 − β)e/n

where β is a constant slightly less than 1, e is a vector of all 1's, and n is the number of nodes in the graph that the transition matrix M represents.
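One iteration of this update can be sketched directly from the formula (function name and toy graph are illustrative):

```python
def pagerank_step(M, v, beta=0.85):
    """One PageRank iteration: v' = beta*M*v + (1 - beta)*e/n.

    M: column-stochastic transition matrix as a list of rows.
    v: the current PageRank estimate, of length n.
    """
    n = len(v)
    teleport = (1 - beta) / n          # the (1 - beta)*e/n term, per component
    return [beta * sum(M[i][j] * v[j] for j in range(n)) + teleport
            for i in range(n)]

# Tiny 2-node graph where each node links to the other; the uniform
# vector is a fixed point of the update here.
M = [[0.0, 1.0], [1.0, 0.0]]
v = [0.5, 0.5]
print([round(x, 6) for x in pagerank_step(M, v)])  # [0.5, 0.5]
```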
Vectors v and v′ are too large for Main Memory
- If we use the stripe method to partition a matrix and vector that do not fit in main memory, then a vertical stripe from the matrix M and a horizontal stripe from the vector v contribute to all components of the result vector v′.
- Since that result vector is the same length as v, it will not fit in main memory either.
- If M is stored column-by-column for efficiency reasons, a column can affect any of the components of v′. As a result, it is unlikely that when we need to add a term to some component v′_i, that component will already be in main memory.
  - Thrashing
Square Blocks, instead of Stripes
- An additional constraint is that the result vector should be partitioned in the same way as the input vector, so the output may become the input for another iteration of the matrix-vector multiplication.
- An alternative strategy is based on partitioning the matrix into k² blocks, while the vectors are still partitioned into k stripes.
Figure from Rajaraman, Ullman
Iterative Computation: Square Blocks Method
- In this method, we use k² Map tasks. Each task gets one square block of the matrix M, say M_ij, and one stripe of the vector v, which must be v_j.
- Notice that each stripe of the vector is sent to k different Map tasks; v_j is sent to the task handling M_ij for each of the k possible values of i. Thus, v is transmitted over the network k times. However, each piece of the matrix is sent only once.
- The advantage of this approach is that we can keep both the jth stripe of v and the ith stripe of v′ in main memory as we process M_ij. Note that all terms generated from M_ij and v_j contribute to v′_i and no other stripe of v′.
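The k² blocks scheme can be sketched on one machine, with each (i, j) loop pair standing in for one Map task on block M_ij and stripe v_j (the function name and the even-division assumption are mine):

```python
from collections import defaultdict

def block_matrix_vector(M, v, k):
    """Sketch of the square-blocks method: M is cut into k x k blocks M_ij,
    v into k stripes v_j. Each (bi, bj) "Map task" multiplies block M_ij by
    stripe v_j; the Reduce step sums the partial stripes into v'."""
    n = len(v)
    w = n // k                          # stripe width; assume k divides n
    partials = defaultdict(lambda: [0] * w)
    for bi in range(k):                 # k^2 tasks, one per block (bi, bj)
        for bj in range(k):
            for r in range(w):          # rows inside block (bi, bj)
                i = bi * w + r
                # contribution of block (bi, bj) to stripe bi of v'
                partials[bi][r] += sum(M[i][bj * w + c] * v[bj * w + c]
                                       for c in range(w))
    # concatenate the k output stripes into v', partitioned like v
    return [x for bi in range(k) for x in partials[bi]]

M = [[1, 2], [3, 4]]
print(block_matrix_vector(M, [5, 6], 2))  # [17, 39]
```

The output is partitioned into the same k stripes as the input, so it can feed the next iteration directly.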
Matrix-Matrix Multiplication using MapReduce
Like the Natural Join operation.
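The analogy to natural join: treat M as a relation of tuples (i, j, m_ij) and N as (j, k, n_jk), join them on the shared index j, multiply matched pairs, then group by (i, k) and sum. A single-machine sketch of that idea (function name is mine):

```python
from collections import defaultdict

def matmul_mapreduce(M, N):
    """Matrix-matrix multiply as a natural join on the shared index j."""
    # "Join" phase: group both relations by the join key j.
    by_j = defaultdict(lambda: ([], []))
    for i, row in enumerate(M):
        for j, m in enumerate(row):
            by_j[j][0].append((i, m))
    for j, row in enumerate(N):
        for k, n in enumerate(row):
            by_j[j][1].append((k, n))
    # Multiply matched pairs, then reduce: sum the products per (i, k).
    out = defaultdict(int)
    for left, right in by_j.values():
        for i, m in left:
            for k, n in right:
                out[(i, k)] += m * n
    return dict(out)

M = [[1, 2], [3, 4]]
N = [[5, 6], [7, 8]]
print(matmul_mapreduce(M, N))  # {(0, 0): 19, (0, 1): 22, (1, 0): 43, (1, 1): 50}
```

In a real MapReduce job this is typically two rounds: one to perform the join on j, and one to group and sum on (i, k).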
The Alternating Least Squares (ALS)Recommender Algorithm
- Matrix A represents a typical user-product-rating model.
- The algorithm tries to derive a set of latent factors (like tastes) from the user-product-rating matrix A.
- So matrix X can be viewed as a user-taste matrix, and matrix Y can be viewed as a product-taste matrix.
Objective Function

minimize ½ ‖A − XYᵀ‖²

Only check observed ratings:

minimize ½ Σ_{(i,j) ∈ Ω} (a_ij − x_iᵀ y_j)²

- If we fix X, the objective function becomes convex; then we can calculate the gradient to find Y.
- If we fix Y, we can calculate the gradient to find X.
Iterative Training Algorithm
- Given Y, you can solve for X:

X_k = A Y_{k−1} (Y_{k−1}ᵀ Y_{k−1})⁻¹

- The algorithm works in the following way:

Y₀ → X₁ → Y₁ → X₂ → Y₂ → …

until the squared difference is small enough.
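A toy rank-1 version of this alternation, where each closed-form solve reduces to a scalar least-squares formula, can illustrate the Y₀ → X₁ → Y₁ → … pattern. This is an assumed simplification for illustration, not Mahout's implementation, and it ignores the observed-only restriction Ω by using a fully observed matrix:

```python
def als_rank1(A, iters=50, reg=0.0):
    """Rank-1 ALS sketch: approximate A ~ x * y^T by alternating
    closed-form least-squares solves for x and y."""
    m, n = len(A), len(A[0])
    y = [1.0] * n                       # Y0: initial guess for item factors
    for _ in range(iters):
        # Fix y, solve x: x_u = sum_i a_ui * y_i / (sum_i y_i^2 + reg)
        yy = sum(v * v for v in y) + reg
        x = [sum(A[u][i] * y[i] for i in range(n)) / yy for u in range(m)]
        # Fix x, solve y: y_i = sum_u a_ui * x_u / (sum_u x_u^2 + reg)
        xx = sum(v * v for v in x) + reg
        y = [sum(A[u][i] * x[u] for u in range(m)) / xx for i in range(n)]
    return x, y

A = [[2, 4], [1, 2]]                    # an exactly rank-1 ratings matrix
x, y = als_rank1(A)
approx = [[round(x[u] * y[i], 3) for i in range(2)] for u in range(2)]
print(approx)  # [[2.0, 4.0], [1.0, 2.0]]
```

With rank r > 1 each update becomes the matrix normal equation X_k = A Y_{k−1}(Y_{k−1}ᵀY_{k−1})⁻¹ from the slide, solved per user (and symmetrically per item).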