Paper: Graph Twiddling in a MapReduce World

Today, at the lab seminar, I presented the paper “Graph Twiddling in a MapReduce World”, published in IEEE’s Computing in Science & Engineering. The paper investigates the feasibility of decomposing graph operations into a series of MapReduce processes. In this post, I discuss it briefly.

As mentioned above, the paper examines whether graph operations can be decomposed into a series of MapReduce processes. MapReduce has been gaining attention in many applications that deal with large-scale datasets, but to the best of my knowledge there have been no prior studies on handling graphs with MapReduce. The paper proposes the following operations (a small sketch of the simplification step follows the list):

  • Augmenting Edges with Degrees
  • Simplifying the Graph
  • Enumerating Triangles
  • Enumerating Rectangles
  • Finding Trusses
  • Barycentric Clustering
  • Finding Components
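
As an example of the flavor of these operations, here is a minimal, in-memory sketch of how I understand the simplification step: the map drops self-loops and keys each edge by its canonically ordered endpoint pair, and the reduce emits one edge per key, discarding duplicates. The `run_mapreduce` helper and the record layout are my own illustration, not code from the paper.

```python
from itertools import groupby
from operator import itemgetter

def simplify_map(edge):
    """Map: drop self-loops and key each surviving edge by its
    canonically ordered endpoint pair so that duplicates collide."""
    u, v = edge
    if u == v:
        return []                      # remove the loop
    key = (min(u, v), max(u, v))
    return [(key, key)]

def simplify_reduce(key, values):
    """Reduce: emit a single edge per key, discarding duplicates."""
    return [key]

def run_mapreduce(records, mapper, reducer):
    """Tiny in-memory stand-in for one MapReduce pass (illustration only)."""
    pairs = [kv for rec in records for kv in mapper(rec)]
    pairs.sort(key=itemgetter(0))
    result = []
    for key, group in groupby(pairs, key=itemgetter(0)):
        result.extend(reducer(key, [v for _, v in group]))
    return result

edges = [(1, 2), (2, 1), (3, 3), (2, 4), (1, 2)]
print(run_mapreduce(edges, simplify_map, simplify_reduce))   # [(1, 2), (2, 4)]
```

Even this trivial cleanup costs a full pass over the edge list, which already hints at why chaining many such operations becomes expensive.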

Some operations are built on top of others. Several of them would be easy problems if we could traverse the graph directly, but, as the author notes, traversing a graph with MapReduce is very inefficient (it causes many MapReduce iterations) because a mapper sees only one record at a time and has no random access to the rest of the graph. All the operations the paper proposes therefore avoid graph traversal. Instead, they share a common pattern, roughly as follows (a small sketch follows the list):

  1. A map operation: read every edge (or vertex) record, possibly changing some piece of its information, and emit records keyed by vertex.
  2. A reduce operation: for each group of records from the map, determine the updated state of the vertex or edge and emit this information as partially (locally) updated records.
  3. A second reduce operation: for each group of records from the previous reduce, combine the partial updates into globally complete, updated records.
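
To make this pattern concrete, here is a small, in-memory sketch of the first operation, augmenting edges with degrees, expressed in these three steps. The `group_by_key` helper only simulates the shuffle between phases, and the record layout is my own; it is an illustration of the pattern, not the paper’s code.

```python
from collections import defaultdict

def group_by_key(pairs):
    """Shuffle stand-in: gather all values per key, as the MapReduce
    runtime would do between a map and a reduce phase."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Step 1 -- map: read every edge and emit it once per endpoint, keyed by vertex.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
by_vertex = group_by_key((v, edge) for edge in edges for v in edge)

# Step 2 -- reduce keyed by vertex: the vertex degree is the number of incident
# edges; emit partially updated edge records carrying that endpoint's degree.
partial = []
for vertex, incident in by_vertex.items():
    degree = len(incident)
    partial.extend((edge, (vertex, degree)) for edge in incident)

# Step 3 -- reduce keyed by edge: combine the two partial records into one
# complete edge record annotated with both endpoint degrees.
for edge, annotations in group_by_key(partial).items():
    degrees = dict(annotations)
    u, v = edge
    print(edge, "->", {"deg_u": degrees[u], "deg_v": degrees[v]})
```

In a real MapReduce job, the output of the second step would have to be re-shuffled keyed by edge before the final combine, which is exactly where the extra iterations discussed below come from.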

Discussion

Even though this paper proposes several graph operations, they still feel unnatural because they require so many MapReduce iterations, and each MapReduce job carries a significant start-up cost. This is largely because a mapper can only read records sequentially. The proposed MapReduce-based graph operations therefore incur overhead from both repeated MR iterations and communication, so the primitive graph operations that are feasible with MapReduce are quite limited. In addition, there is further evidence that MapReduce is not well suited to graph operations, but I will describe it in a later post.

Therefore, I think a new programming model for graph (or otherwise complex) data is needed. Ideally, such a model should support graph traversal, and data should be placed so that locality follows connectivity even when the data are distributed across many nodes. Based on these ideas, I am fleshing out “Hamburg: A New Programming Model for Graph Data”, inspired by the blog post “Large-scale Graph Computing at Google”.
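
To sketch what I mean by supporting traversal, the snippet below is a toy, single-machine imitation of the kind of vertex-centric, superstep-based computation that the Google post hints at: breadth-first search expressed as rounds of message passing between vertices. The names and structure are only my illustration of the idea, not Hamburg’s actual design.

```python
from collections import defaultdict

def bfs_supersteps(adjacency, source):
    """Toy vertex-centric BFS: in each superstep, frontier vertices send
    messages to their neighbors, which then update their own state."""
    distance = {v: None for v in adjacency}
    distance[source] = 0
    frontier = {source}
    step = 0
    while frontier:
        step += 1
        messages = defaultdict(list)
        # "Compute" phase: every active vertex messages its neighbors.
        for v in frontier:
            for w in adjacency[v]:
                messages[w].append(step)
        # Next superstep: unvisited vertices that received a message
        # adopt the smallest distance offered to them.
        frontier = {w for w in messages if distance[w] is None}
        for w in frontier:
            distance[w] = min(messages[w])
    return distance

graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
print(bfs_supersteps(graph, "a"))   # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```

The point is that the traversal state lives with the vertices across supersteps, instead of being re-read from scratch in every MapReduce pass.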

5 Comments on “Paper: Graph Twiddling in a MapReduce World”

  1. chung says:

Sounds interesting.. How is it relevant to your previous paper – CIKM?

  2. Hyunsik Choi says:

This paper and my previous work have a common point: both deal with graph data on MapReduce. Recently I've been thinking that MapReduce is inappropriate for graph processing, even though it is very scalable in many cases. After reading this paper, I'm more convinced of that.

  3. Hyunsik Choi says:

This is a reply test.

  4. ms says:

    Hi, the link to the paper doesn't work.

  5. Hyunsik Choi says:

The link has changed to the one below, but you need a login ID to access the paper.
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arn

