Karmarkar–Karp bin packing algorithms


The Karmarkar–Karp (KK) bin packing algorithms are several related approximation algorithms for the bin packing problem: packing items of different sizes into bins of identical capacity, such that the total number of bins is as small as possible. Finding the optimal solution is computationally hard (the problem is NP-hard). Karmarkar and Karp devised an algorithm that runs in polynomial time and finds a solution with at most OPT + O(log²(OPT)) bins, where OPT is the number of bins in the optimal solution. They also devised several other algorithms with slightly different approximation guarantees and run-time bounds.

The KK algorithms were considered a breakthrough in the study of bin packing: the previously known algorithms found multiplicative approximations, where the number of bins was at most r·OPT + s for some constants r > 1, s > 0, or at most (1+ε)·OPT + 1. The KK algorithms were the first to attain an additive approximation.

Input

The input to a bin-packing problem is a set of items of different sizes, a1,...,an. The following notation is used:

  • n - the number of items.
  • m - the number of different item sizes. For each i in 1,...,m:
    • si is the i-th size;
    • ni is the number of items of size si.
  • B - the bin size.

Given an instance I, we denote:

  • OPT(I) = the optimal solution of instance I.
  • FOPT(I) = (a1+...+an)/B = the theoretically-optimal number of bins, when all bins are completely filled with items or item fractions.

Obviously, FOPT(I) ≤ OPT(I).
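As a concrete illustration of the notation, the following small helper (hypothetical, not part of the KK algorithms) computes n, m, the multiplicities n_i and the fractional lower bound FOPT for a given instance:

```python
from collections import Counter

def instance_stats(sizes, B):
    """Summarize a bin-packing instance: n (number of items), m (number of
    distinct sizes), the multiplicities n_i, and the fractional lower
    bound FOPT = (a1 + ... + an) / B."""
    counts = Counter(sizes)        # size s_i -> multiplicity n_i
    n = len(sizes)
    m = len(counts)
    fopt = sum(sizes) / B
    return n, m, dict(counts), fopt

# Ten items (five of size 3, five of size 4) and bins of capacity B = 12.
n, m, counts, fopt = instance_stats([3] * 5 + [4] * 5, B=12)
# FOPT = 35/12, while the best integral packing needs 3 bins: FOPT <= OPT.
```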

High-level scheme

The KK algorithms essentially solve the configuration linear program:

minimize 1·x   subject to   A·x ≥ n,  x ≥ 0,  and x integer.

Here, A is a matrix with m rows. Each column of A represents a feasible configuration - a multiset of item-sizes, such that the sum of all these sizes is at most B. The set of configurations is C. x is a vector of size C. Each element xc of x represents the number of times configuration c is used.

  • Example: suppose the item sizes are 3,3,3,3,3,4,4,4,4,4, and B=12. Then there are C=10 possible configurations: 3333, 333, 33, 334, 3, 34, 344, 4, 44, 444. The matrix A has two rows, one for s=3 and one for s=4. The vector n is (5,5), since there are 5 items of each size. A possible optimal solution sets xc=1 for the configurations 3333, 344 and 444 (and xc=0 for the rest), corresponding to using three bins.
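The configurations of such an example can be enumerated mechanically. The following sketch (function name hypothetical) lists every count vector over the distinct sizes whose total size fits into one bin:

```python
def feasible_configs(sizes, B):
    """Enumerate feasible configurations as count vectors over the
    distinct sizes: every non-empty multiset with total size <= B."""
    configs = []
    def extend(i, config, room):
        if i == len(sizes):
            if any(config):               # skip the empty configuration
                configs.append(tuple(config))
            return
        for c in range(room // sizes[i] + 1):
            extend(i + 1, config + [c], room - c * sizes[i])
    extend(0, [], B)
    return configs

# Distinct sizes 3 and 4 with B = 12: the 10 configurations of the example.
configs = feasible_configs([3, 4], B=12)
```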

There are two main difficulties in solving this problem. First, it is an integer linear program, which is computationally hard to solve. Second, the number of variables is C - the number of configurations, which may be enormous. The KK algorithms cope with these difficulties using several techniques, some of which were already introduced by de la Vega and Lueker. Here is a high-level description of the algorithm (where I is the original instance):

  • 1-a. Let J be an instance constructed from I by removing small items.
    • 2-a. Let K be an instance constructed from J by grouping items and rounding the size of the items in each group up to the largest item in the group.
      • 3-a. Construct the configuration linear program for K, without the integrality constraints.
        • 4. Compute a (fractional) solution x for the relaxed linear program.
      • 3-b. Round x to an integral solution for K.
    • 2-b. "Un-group" the items to get a solution for J.
  • 1-b. Add the small items to get a solution for I.

Below, we describe each of these steps in turn.

Step 1. Removing and adding small items

The motivation for removing small items is that, when all items are large, the number of items in each bin must be small, so the number of possible configurations is (relatively) small. We pick some constant g ∈ (0,1), and remove from the original instance I all items smaller than g·B. Let J be the resulting instance. Note that in J, each bin can contain at most 1/g items. We pack J and get a packing with some bJ bins.

Now, we add the small items into the existing bins in an arbitrary order, as long as there is room. When there is no more room in the existing bins, we open a new bin (as in next-fit bin packing). Let bI be the number of bins in the final packing. Then:

bI ≤ max(bJ, (1+2g)·OPT(I) + 1).

Proof. If no new bins are opened, then the number of bins remains bJ. If a new bin is opened, then all bins except maybe the last one contain a total size of at least B − g·B, so the total instance size is at least (1−g)·B·(bI−1). Therefore, FOPT ≥ (1−g)·(bI−1), so the optimal solution needs at least (1−g)·(bI−1) bins. So bI ≤ OPT/(1−g) + 1 = (1+g+g²+...)·OPT + 1 ≤ (1+2g)·OPT + 1 (the last inequality holds for g ≤ 1/2). In particular, by taking g=1/n, we get:

bI ≤ max(bJ, OPT + 2·OPT(I)/n + 1) ≤ max(bJ, OPT + 3),

since OPT(I) ≤ n. Therefore, it is common to assume that all items are larger than B/n.
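Step 1-b can be sketched as follows; the packing of J is given as lists of item sizes, and the helper (name hypothetical) inserts each small item into the first bin with room, opening a new bin only when no existing bin can hold it:

```python
def add_small_items(bins, small_items, B):
    """Step 1-b: insert the small items into an existing packing of J.
    An item goes into the first bin with room; a new bin is opened only
    when no existing bin can hold it (as in next-fit)."""
    for item in small_items:
        for b in bins:
            if sum(b) + item <= B:
                b.append(item)
                break
        else:
            bins.append([item])    # no room anywhere: open a new bin
    return bins

# A packing of the large items, then three small items are added (B = 12).
bins = add_small_items([[7, 4], [9]], small_items=[2, 1, 1], B=12)
```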

Step 2. Grouping and un-grouping items

The motivation for grouping items is to reduce the number of different item sizes, to reduce the number of constraints in the configuration LP. The general grouping process is:

  • Order the items by descending size.
  • Partition the items into groups.
  • For each group, modify the size of all items in the group to the largest size in the group.

There are several different grouping methods.

Linear grouping

Let k > 1 be an integer parameter. Put the largest k items in group 1; the next-largest k items in group 2; and so on (the last group might have fewer than k items). Let J be the original instance. Let K′ be the first group (the group of the k largest items), and K the grouped instance without the first group. Then:

  • In K′ all items have the same size. In K the number of different sizes is m(K) ≤ n/k + 1.
  • OPT(K) ≤ OPT(J) - since group 1 in J dominates group 2 in K (all k items in group 1 are at least as large as the k items in group 2); similarly, group 2 in J dominates group 3 in K, etc.
  • OPT(K′) ≤ k - since it is possible to pack each item in K′ into a single bin.

Therefore, OPT(J) ≤ OPT(K ∪ K′) ≤ OPT(K) + OPT(K′) ≤ OPT(K) + k. Indeed, given a solution to K with bK bins, we can get a solution to J with at most bK + k bins.
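The linear-grouping step above can be sketched as follows (function name hypothetical): sort descending, cut into groups of k, round each group up to its largest member, and split off the first group as K′:

```python
def linear_grouping(sizes, k):
    """Linear grouping with parameter k: sort sizes in descending order,
    cut into groups of k, and round every size up to the largest in its
    group.  Returns (K_prime, K): the group of the k largest items
    (packed separately, at most one per bin) and the rounded rest."""
    sizes = sorted(sizes, reverse=True)
    groups = [sizes[i:i + k] for i in range(0, len(sizes), k)]
    K_prime = groups[0]
    # Every item in a later group is rounded up to its group's maximum.
    K = [max(g) for g in groups[1:] for _ in g]
    return K_prime, K

K_prime, K = linear_grouping([9, 8, 7, 6, 5, 4, 3], k=2)
# K has only 3 distinct sizes (7, 5, 3), in line with m(K) <= n/k + 1.
```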

Geometric grouping

Let k > 1 be an integer parameter. Geometric grouping proceeds in two steps:

  • Partition the instance J into several instances J0, J1, ... such that, in each instance Jr, all sizes are in the interval [B/2^(r+1), B/2^r). Note that, if all items in J have size at least g·B, then the number of instances is at most log₂(1/g).
  • On each instance Jr, perform linear grouping with parameter k·2^r. Let Kr, K′r be the resulting instances. Let K := ∪r Kr and K′ := ∪r K′r.

Then, the number of different sizes is bounded as follows:

  • For all r, m(K′r) = 1 and m(Kr) ≤ n(Jr)/(k·2^r) + 1. Since all items in Jr are larger than B/2^(r+1), we have n(Jr) ≤ 2^(r+1)·FOPT(Jr), so m(Kr) ≤ 2·FOPT(Jr)/k + 1. Summing over all r gives m(K) ≤ 2·FOPT(J)/k + log₂(1/g).

The number of bins is bounded as follows:

  • For all r, OPT(K′r) ≤ k - since K′r has k·2^r items, all of them smaller than B/2^r, so they can be packed into at most k bins.
  • Therefore, OPT(K′) ≤ k·log₂(1/g).
  • Therefore, OPT(J) ≤ OPT(K) + OPT(K′) ≤ OPT(K) + k·log₂(1/g).
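The partition step of geometric grouping can be sketched as follows (name hypothetical; the subsequent linear grouping of each part is omitted here):

```python
def geometric_partition(sizes, B):
    """Partition an instance into sub-instances J_0, J_1, ..., where J_r
    contains the items with size in [B / 2^(r+1), B / 2^r).  Linear
    grouping with parameter k * 2^r would then be applied to each J_r."""
    parts = {}
    for s in sizes:
        r = 0
        while s < B / 2 ** (r + 1):   # find r with B/2^(r+1) <= s < B/2^r
            r += 1
        parts.setdefault(r, []).append(s)
    return parts

# With B = 12: 7 lands in [6, 12); 5, 4, 3 land in [3, 6); 2 in [1.5, 3).
parts = geometric_partition([7, 5, 4, 3, 2], B=12)
```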

Alternative geometric grouping

Let k > 1 be an integer parameter. Order the items by descending size. Partition them into groups such that the total size in each group is at least k·B. Since the size of each item is less than B, the number of items in each group is at least k+1. The number of items in each group is weakly increasing. If all items are larger than g·B, then the number of items in each group is at most k/g. In each group, only the larger items are rounded up. This can be done such that:

  • m(K) ≤ FOPT(J)/k + ln(1/g).
  • OPT(J) ≤ OPT(K) + 2k·(2 + ln(1/g)).

Step 3. Constructing the LP and rounding the solution

We consider the configuration linear program without the integrality constraints:

minimize 1·x   subject to   A·x ≥ n,  x ≥ 0.

Here, we are allowed to use a fractional number of each configuration.

  • Example: suppose there are 31 items of size 3 and 7 items of size 4, and the bin-size is B=10. The configurations are: 4, 44, 34, 334, 3, 33, 333. The constraints are (0,0,1,2,1,2,3)·x ≥ 31 (for size 3) and (1,2,1,1,0,0,0)·x ≥ 7 (for size 4). An optimal solution to the fractional LP is x = (0,0,0,7,0,0,17/3). That is: there are 7 bins of configuration 334 and 17/3 bins of configuration 333, for a total of 38/3 bins. Note that only two different configurations are needed.
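The example can be checked numerically. Assuming the configurations are ordered as listed (4, 44, 34, 334, 3, 33, 333), the constraint rows and the claimed fractional solution are:

```python
from fractions import Fraction

# Rows of A for sizes (3, 4); columns in the order 4, 44, 34, 334, 3, 33, 333.
A = [[0, 0, 1, 2, 1, 2, 3],    # number of 3s in each configuration
     [1, 2, 1, 1, 0, 0, 0]]    # number of 4s in each configuration
n_vec = [31, 7]

# Claimed optimum: 7 bins of configuration 334 and 17/3 bins of 333.
x = [0, 0, 0, 7, 0, 0, Fraction(17, 3)]

# A·x must cover the demand vector n; the objective is the total bin count.
covered = [sum(a * xc for a, xc in zip(row, x)) for row in A]
total_bins = sum(x)   # 38/3 fractional bins
```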

Denote the optimal solution of the linear program by LOPT. The following relations are obvious:

  • FOPT(I) ≤ LOPT(I), since FOPT(I) is the (possibly fractional) number of bins when all bins are completely filled with items or fractions of items. Clearly, no solution can be more efficient.
  • LOPT(I) ≤ OPT(I), since LOPT(I) is a solution to a minimization problem with fewer constraints.
  • OPT(I) < 2*FOPT(I), since in any packing with at least 2*FOPT(I) bins, the sum of the two least-full bins is at most B, so they can be combined into a single bin.

A solution to the fractional LP can be rounded to an integral solution as follows.

  • Let x be an optimal basic feasible solution of the fractional LP. Suppose it uses ∑c xc = bL bins (note that bL may be a fractional number). Since the fractional LP has m constraints (one for each distinct size), x has at most m nonzero variables, that is, at most m different configurations are used. We construct from x an integral packing consisting of a principal part and a residual part.
  • The principal part contains floor(xc) bins of each configuration c for which xc > 0.
  • For the residual part (denoted by R), we construct two candidate packings:
    • A single bin of each configuration c for which xc > 0; all in all, at most m bins are needed.
    • A greedy packing, with fewer than 2·FOPT(R) bins (since if there are at least 2·FOPT(R) bins, the two smallest ones can be combined).
  • The smallest of these packings requires min(m, 2·FOPT(R)) ≤ average(m, 2·FOPT(R)) = FOPT(R) + m/2 bins.
  • Adding to this the rounded-down bins of the principal part yields at most bL + m/2 bins.
  • The execution time of this conversion algorithm is O(n log n).

This also implies that OPT(I) ≤ LOPT(I) + m/2.
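The split into a principal and a residual part can be sketched as follows (helper name hypothetical), using the earlier example with sizes 3 and 4 and B = 10:

```python
from math import floor

def split_principal_residual(A, x, n_vec):
    """Split a fractional LP solution x into the principal integral part
    (floor(x_c) bins of each configuration) and the residual demand that
    those bins leave uncovered."""
    floors = [floor(xc) for xc in x]
    packed = [sum(row[c] * floors[c] for c in range(len(x))) for row in A]
    residual = [max(0, n_vec[i] - packed[i]) for i in range(len(n_vec))]
    return floors, residual

# The earlier example: 7 bins of configuration 334 and 17/3 bins of 333.
A = [[0, 0, 1, 2, 1, 2, 3],    # items of size 3 per configuration
     [1, 2, 1, 1, 0, 0, 0]]    # items of size 4 per configuration
floors, residual = split_principal_residual(A, [0, 0, 0, 7, 0, 0, 17 / 3], [31, 7])
# Two items of size 3 remain; the residual is packed by the cheaper of the
# two candidates (one bin per used configuration, or a greedy packing).
```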

Step 4. Solving the fractional LP

The main challenge in solving the fractional LP is that it may have a huge number of variables - a variable for each possible configuration.

The dual LP

The dual linear program of the fractional LP is:

maximize n·y   subject to   Aᵀ·y ≤ 1,  y ≥ 0.

It has m variables y1, ..., ym, and C constraints - a constraint for each configuration. It has the following economic interpretation. For each size si, we should determine a nonnegative price yi. Our profit is the total price of all items. We want to maximize the profit n·y, subject to the constraint that the total price of the items in each configuration is at most 1. This LP now has only m variables, but a huge number of constraints. Even listing all the constraints is infeasible.

Fortunately, it is possible to solve the problem up to any given precision without listing all the constraints, by using a variant of the ellipsoid method. This variant gets as input a separation oracle: a function that, given a vector y ≥ 0, returns one of the following two options:

  • Assert that y is feasible, that is, Aᵀ·y ≤ 1; or
  • Assert that y is infeasible, and return a specific constraint that is violated, that is, a vector a such that a·y > 1.

The ellipsoid method starts with a large ellipsoid that contains the entire feasible domain Aᵀ·y ≤ 1. At each step t, it takes the center yt of the current ellipsoid, and sends it to the separation oracle:

  • If the oracle says that yt is feasible, then we do an "optimality cut": we cut from the ellipsoid all points y for which n·y < n·yt. These points are definitely not optimal.
  • If the oracle says that yt is infeasible and violates the constraint a, then we do a "feasibility cut": we cut from the ellipsoid all points y for which a·y > 1. These points are definitely not feasible.

After making a cut, we construct a new, smaller ellipsoid. It can be shown that this process converges to an approximate solution, in time polynomial in the required accuracy.

A separation oracle for the dual LP

We are given m non-negative numbers y1, ..., ym. We have to decide between the following two options:

  • For every feasible configuration, the sum of the yi corresponding to its items is at most 1; this means that y is feasible.
  • There exists a feasible configuration for which the sum of the yi is larger than 1; this means that y is infeasible. In this case, we also have to return the violating configuration.

This problem can be solved by solving a knapsack problem, where the item values are y1, ..., ym, the item weights are s1, ..., sm, and the weight capacity is B (the bin size).

  • If the total value of the optimal knapsack solution is at most 1, then we say that y is feasible.
  • If the total value of the optimal knapsack solution is larger than 1, then we say that y is infeasible, and the items in the optimal knapsack solution correspond to a configuration that violates a constraint (since a·y > 1 for the vector a that corresponds to this configuration).

The knapsack problem can be solved by dynamic programming in pseudo-polynomial time O(m·V), where m is the number of inputs and V is the number of different possible values. To get a polynomial-time algorithm, we can solve the knapsack problem approximately, using input rounding. Suppose we want a solution with tolerance δ. We can round each of y1, ..., ym down to the nearest multiple of δ/n. Then, the number of possible values between 0 and 1 is n/δ, and the run-time is O(mn/δ). Since a configuration contains at most n items, the value found is at least the optimal value minus δ.
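A sketch of the exact (pseudo-polynomial) oracle, assuming integer sizes and treating the knapsack as unbounded (each size may repeat up to the capacity limit); names are hypothetical:

```python
def separation_oracle(sizes, y, B):
    """Separation oracle for the dual LP: maximize the total price of a
    configuration (unbounded knapsack with values y_i, weights s_i,
    capacity B).  A value > 1 means y is infeasible, and the maximizing
    configuration is a violated constraint."""
    # best[w] = (value, count vector) of the best configuration of weight <= w
    best = [(0.0, [0] * len(sizes)) for _ in range(B + 1)]
    for w in range(1, B + 1):
        best[w] = best[w - 1]
        for i, s in enumerate(sizes):
            if s <= w and best[w - s][0] + y[i] > best[w][0]:
                cfg = best[w - s][1][:]
                cfg[i] += 1
                best[w] = (best[w - s][0] + y[i], cfg)
    value, config = best[B]
    return (value <= 1.0), config

# Prices y = (0.35, 0.45) for sizes (3, 4), B = 10: configuration 334 has
# total price 0.35 + 0.35 + 0.45 = 1.15 > 1, so y is infeasible.
feasible, config = separation_oracle([3, 4], [0.35, 0.45], B=10)
```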

Ellipsoid method with an approximate separation oracle

The ellipsoid method should be adapted to use the approximate separation oracle. Given the current ellipsoid center yt:

  • If the approximate oracle returns a solution with value larger than 1, then yt is definitely infeasible, and the solution corresponds to a configuration that violates a constraint a. We do a "feasibility cut" at yt, cutting from the ellipsoid all points y for which a·y > 1.
  • If the approximate oracle returns a solution with value at most 1, then yt may or may not be feasible, but yt rounded down (denote it by zt) is feasible. By definition of the rounding, n·zt ≥ n·yt − n·1·(δ/n) = n·yt − δ. We still do an "optimality cut" at yt: we cut from the ellipsoid all points y for which n·y < n·yt. Note that yt might be infeasible, so its value might be larger than OPT. Therefore, we might remove some points whose objective is optimal. However, the removed points satisfy n·y < n·zt + δ ≤ OPT + δ; no point is removed if its value exceeds the value at zt by more than δ.

Using the approximate separation oracle gives a feasible solution y* to the dual LP, with n·y* ≥ LOPT − δ, after at most Q iterations, where Q = 4m²·ln(mn/(g·δ)). The total run-time of the ellipsoid method with the approximate separation oracle is O(Q·mn/δ).

Eliminating constraints

During the ellipsoid method, we use at most Q constraints of the form a·y ≤ 1. All the other constraints can be eliminated, since they have no effect on the outcome y* of the ellipsoid method. We can eliminate even more constraints. It is known that, in any LP with m variables, there is a set of m constraints that is sufficient for determining the optimal solution (that is, the optimal value is the same even if only these m constraints are used). We can repeatedly run the ellipsoid method as above, each time trying to remove a specific set of constraints. If the resulting error is at most δ, then we remove these constraints permanently. It can be shown that we need at most ≈ (Q/m) + m·ln(Q/m) eliminations, so the accumulated error is at most ≈ δ·[(Q/m) + m·ln(Q/m)]. If we try sets of constraints deterministically, then in the worst case, one out of m trials succeeds, so we need to run the ellipsoid method at most ≈ m·[(Q/m) + m·ln(Q/m)] = Q + m²·ln(Q/m) times. If we choose the constraints to remove at random, then the expected number of iterations is O(m)·[1 + ln(Q/m)].

Finally, we have a reduced dual LP, with only m variables and m constraints. The optimal value of the reduced LP is at least LOPT − h, where h ≈ δ·[(Q/m) + m·ln(Q/m)].

Solving the primal LP

By the LP duality theorem, the minimum value of the primal LP equals the maximum value of the dual LP, which we denoted by LOPT. Once we have a reduced dual LP, we take its dual and obtain a reduced primal LP. This LP has only m variables, corresponding to only m out of the C configurations. The maximum value of the reduced dual LP is at least LOPT − h. It can be shown that the optimal solution of the reduced primal LP is at most LOPT + h. The solution gives a near-optimal bin packing, using at most m configurations.

The total run-time of the deterministic algorithm, when all items are larger than g·B, is:

O((Qmn/δ)·(Q + m²·ln(Q/m))) = O((Q²mn + Q·m³·n·ln(Q/m))/δ) ≈ O(m⁸·ln(m)·ln²(mn/(g·h)) + (m⁴·n·ln(m)/h)·ln(mn/(g·h))).

The expected total run-time of the randomized algorithm is: O(m⁷·log(m)·log²(mn/(g·h)) + (m⁴·n·log(m)/h)·log(mn/(g·h))).

End-to-end algorithms

Karmarkar and Karp presented three algorithms that use the above techniques with different parameters. The run-time of all these algorithms depends on a function T(·,·), a polynomial function describing the time it takes to solve the fractional LP with tolerance h=1, which, for the deterministic version, is T(m,n) ∈ O(m⁸·log(m)·log²(n) + m⁴·n·log(m)·log(n)).

Algorithm 1

Let ε > 0 be a constant representing the desired approximation accuracy.

  • 1-a. Set g = max(1/n, ε/2). Let J be an instance constructed from I by removing all items smaller than g·B.
    • 2-a. Set k = n·ε². Let K be an instance constructed from J by linear grouping with parameter k, and let K′ be the remaining instance (the group of the k largest items). Note that m(K) ≤ n/k + 1 ≈ 1/ε².
      • 3-a. Construct the configuration linear program for K, without the integrality constraints.
        • 4. Compute a solution x for K, with tolerance h=1. The result is a fractional bin packing with bL ≤ LOPT(K) + 1 bins. The run-time is T(m(K), n(K)) ≤ T(ε⁻², n).
      • 3-b. Round x to an integral solution for K. Add at most m(K)/2 bins for the fractional part. The total number of bins is bK ≤ bL + m(K)/2 ≤ LOPT(K) + 1 + 1/(2ε²).
    • 2-b. Pack the items in K′ using at most k bins; get a packing of J. The number of bins is bJ ≤ bK + k ≤ LOPT(K) + 1 + 1/(2ε²) + n·ε².
  • 1-b. Add the items smaller than g·B to get a solution for I. The number of bins is: bI ≤ max(bJ, (1+2g)·OPT(I) + 1) ≤ (1+ε)·OPT(I) + 1/(2ε²) + 3.

All in all, the number of bins is in (1+ε)·OPT + O(ε⁻²) and the run-time is in O(n·log(n) + T(ε⁻², n)). By choosing ε = OPT^(−1/3) we get OPT + O(OPT^(2/3)).

Algorithm 2

Let g > 0 be a real parameter and k > 0 an integer parameter, to be determined later.

  • 1-a. Let J be an instance constructed from I by removing all items smaller than g·B.
  • 2. While FOPT(J) > 1 + (k/(k−1))·ln(1/g) do:
    • 2-a. Do the alternative geometric grouping with parameter k. Let K be the resulting instance, and let K′ be the remaining instance. We have m(K) ≤ FOPT(J)/k + ln(1/g).
      • 3-a. Construct the configuration linear program for K, without the integrality constraints.
        • 4. Compute a solution x for K, with tolerance h=1. The result is a fractional bin packing with bL ≤ LOPT(K) + 1 bins. The run-time is T(m(K), n(K)) ≤ T(FOPT(J)/k + ln(1/g), n).
      • 3-b. Round x to an integral solution for K. Do not add bins for the fractional part. Instead, just remove the packed items from J.
    • 2-b. Pack the items in K′ in at most 2k·(2 + ln(1/g)) bins.
  • 2. Once FOPT(J) ≤ 1 + (k/(k−1))·ln(1/g), pack the remaining items greedily into at most 2·FOPT(J) ≤ 2 + (2k/(k−1))·ln(1/g) bins.
    • At each iteration of the loop in step 2, the fractional part of x has at most m(K) patterns, so FOPT(J_{t+1}) ≤ m(K_t) ≤ FOPT(J_t)/k + ln(1/g). The FOPT drops by a factor of roughly k in each iteration, so the number of iterations is at most ln(FOPT(I))/ln(k) + 1.
    • Therefore, the total number of bins used for J is: bJ ≤ OPT(I) + [1 + ln(FOPT(I))/ln(k)]·[1 + 4k + 2k·ln(1/g)] + 2 + (2k/(k−1))·ln(1/g).
  • 1-b. Add the items smaller than g·B to get a solution for I. The number of bins is: bI ≤ max(bJ, (1+2g)·OPT(I) + 1).

The run-time is in O(n·log(n) + T(FOPT(J)/k + ln(1/g), n)).

Now, if we choose k=2 and g=1/FOPT(I), we get:

bJ ≤ OPT + O(log²(FOPT)),

and hence:

bI ≤ max(bJ, OPT + 2·OPT/FOPT + 1) ≤ max(bJ, OPT + 5) ∈ OPT + O(log²(OPT)),

so the total number of bins is in OPT + O(log²(FOPT)). The run-time is O(n·log(n)) + T(FOPT/2 + ln(FOPT), n) ∈ O(n·log(n) + T(FOPT, n)).

The same algorithm can be used with different parameters to trade off run-time and accuracy. For a parameter α ∈ (0,1), choose k = FOPT^α and g = 1/FOPT^(1−α). Then, the packing needs at most OPT + O(OPT^α) bins, and the run-time is in O(n·log(n) + T(FOPT^(1−α), n)).

Algorithm 3

The third algorithm is useful when the number of sizes m is small (see also high-multiplicity bin packing).

  • 1-a. Set g = log²(m)/FOPT(I). Let K be an instance constructed from I by removing all items smaller than g·B.
  • If m(K) ≤ FOPT(K) then:
    • 3-a. Construct the configuration linear program for K, without the integrality constraints.
      • 4. Compute a solution x for K, with tolerance h=1. The result is a fractional bin packing with bL ≤ LOPT(K) + 1 bins. The run-time is T(m(K), n(K)) ≤ T(m, n).
    • 3-b. Round x to an integral solution for K. Do not add bins for the fractional part. Instead, just remove the packed items from K.
  • Run step 2 of Algorithm 2 on the remaining pieces.
  • 1-b. Add the items smaller than g·B to get a solution for I. The number of bins is: bI ≤ max(bJ, (1+2g)·OPT(I) + 1).

It uses at most OPT + O(log²(m)) bins, and the run-time is in O(n·log(n) + T(m, n)).

Improvements

The KK techniques were improved later, to provide even better approximations.

Rothvoss uses the same scheme as Algorithm 2, but with a different rounding procedure in Step 2. He introduced a "gluing" step, in which small items are glued together to yield a single larger item. This gluing can be used to increase the smallest item size to about B/log¹²(n). When all sizes are at least B/log¹²(n), we can substitute g = 1/log¹²(n) in the guarantee of Algorithm 2, and get:

bJ ≤ OPT(I) + O(log(FOPT)·log(log(n))),

which yields a packing with at most OPT + O(log(OPT)·log log(OPT)) bins.

Hoberg and Rothvoss use a similar scheme in which the items are first packed into "containers", and then the containers are packed into bins. Their algorithm needs at most OPT(I) + O(log(OPT)) bins.

References

  1. Karmarkar, Narendra; Karp, Richard M. (November 1982). "An efficient approximation scheme for the one-dimensional bin-packing problem". 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982): 312–320. doi:10.1109/SFCS.1982.61. S2CID 18583908.
  2. Fernandez de la Vega, W.; Lueker, G. S. (1981). "Bin packing can be solved within 1 + ε in linear time". Combinatorica. 1 (4): 349–355. doi:10.1007/BF02579456. ISSN 1439-6912. S2CID 10519631.
  3. Claire Mathieu. "Approximation Algorithms Part I, Week 3: bin packing". Coursera.
  4. Rothvoß, T. (2013-10-01). "Approximating Bin Packing within O(log OPT · Log Log OPT) Bins". 2013 IEEE 54th Annual Symposium on Foundations of Computer Science. pp. 20–29. arXiv:1301.4010. doi:10.1109/FOCS.2013.11. ISBN 978-0-7695-5135-7. S2CID 15905063.
  5. Hoberg, Rebecca; Rothvoss, Thomas (2017-01-01). "A Logarithmic Additive Integrality Gap for Bin Packing". Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms. pp. 2616–2625. arXiv:1503.08796. doi:10.1137/1.9781611974782.172. ISBN 978-1-61197-478-2. S2CID 1647463.