
Multiway number partitioning


In computer science, multiway number partitioning is the problem of partitioning a multiset of numbers into a fixed number of subsets, such that the sums of the subsets are as similar as possible. It was first presented by Ronald Graham in 1969 in the context of the identical-machines scheduling problem. The problem is parametrized by a positive integer k, and called k-way number partitioning. The input to the problem is a multiset S of numbers (usually integers), whose sum is k*T.

The associated decision problem is to decide whether S can be partitioned into k subsets such that the sum of each subset is exactly T. There is also an optimization problem: find a partition of S into k subsets, such that the k sums are "as near as possible". The exact optimization objective can be defined in several ways:

  • Minimize the difference between the largest sum and the smallest sum. This objective is common in papers about multiway number partitioning, as well as papers originating from physics applications.
  • Minimize the largest sum. This objective is equivalent to one objective for Identical-machines scheduling. There are k identical processors, and each number in S represents the time required to complete a single-processor job. The goal is to partition the jobs among the processors such that the makespan (the finish time of the last job) is minimized.
  • Maximize the smallest sum. This objective corresponds to the application of fair item allocation, particularly the maximin share. It also appears in voting manipulation problems, and in sequencing of maintenance actions for modular gas turbine aircraft engines. Suppose there are k engines, which must be kept working for as long as possible. Each engine needs a certain critical part in order to operate. There is a set S of parts, each of which has a different lifetime. The goal is to assign the parts to the engines such that the shortest engine lifetime is as large as possible.

These three objective functions are equivalent when k=2, since the two subset sums add up to a fixed total; they are all different when k≥3.

All these problems are NP-hard, but there are various algorithms that solve them efficiently in many cases.

Some closely-related problems are:

  • The partition problem - a special case of multiway number partitioning in which the number of subsets is 2.
  • The 3-partition problem - a different and harder problem, in which the number of subsets is not considered a fixed parameter, but is determined by the input (the number of sets is the number of integers divided by 3).
  • The bin packing problem - a dual problem in which the total sum in each subset is bounded, but k is flexible; the goal is to find a partition with the smallest possible k. The optimization objectives are closely related: the optimal number of d-sized bins is at most k, iff the optimal size of a largest subset in a k-partition is at most d.
  • The uniform-machines scheduling problem - a more general problem in which different processors may have different speeds.

Approximation algorithms

There are various algorithms that obtain a guaranteed approximation of the optimal solution in polynomial time. There are different approximation algorithms for different objectives.

Minimizing the largest sum

The approximation ratio in this context is the largest sum in the solution returned by the algorithm, divided by the largest sum in the optimal solution (the ratio is larger than 1). Most algorithms below were developed for identical-machines scheduling.

  • Greedy number partitioning (also called Largest Processing Time in the scheduling literature) loops over the numbers, and puts each number in the set whose current sum is smallest. If the numbers are not sorted, then the runtime is $O(n)$ and the approximation ratio is at most $2-1/k$. Sorting the numbers increases the runtime to $O(n\log n)$ and improves the approximation ratio to 7/6 when k=2, and $\frac{4}{3}-\frac{1}{3k}=\frac{4k-1}{3k}$ in general. If the numbers are distributed uniformly in [0,1], then the approximation ratio is at most $1+O(\log\log n/n)$ almost surely, and $1+O(1/n)$ in expectation. (A sketch of the greedy rule appears after this list.)
  • The Largest Differencing Method (also called the Karmarkar–Karp algorithm) sorts the numbers in descending order and repeatedly replaces numbers by their differences. The runtime complexity is $O(n\log n)$. In the worst case, its approximation ratio is similar – at most 7/6 for k=2, and at most $4/3-1/(3k)$ in general. However, in the average case it performs much better than the greedy algorithm: for k=2, when the numbers are distributed uniformly in [0,1], its approximation ratio is at most $1+1/n^{\Theta(\log n)}$ in expectation. It also performs better in simulation experiments.
  • The Multifit algorithm uses binary search combined with an algorithm for bin packing. In the worst case, its approximation ratio is at most 8/7 for k=2, and at most 13/11 in general.
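
The greedy rule is simple enough to state in a few lines. Below is a minimal Python sketch (the function name and structure are illustrative, not from any particular library); a heap keeps the subset with the smallest current sum on top:

```python
import heapq

def greedy_partition(numbers, k):
    """LPT-style greedy: sort descending, then put each number into
    the subset whose current sum is smallest."""
    heap = [(0, i) for i in range(k)]          # (current sum, subset index)
    heapq.heapify(heap)
    subsets = [[] for _ in range(k)]
    for x in sorted(numbers, reverse=True):
        s, i = heapq.heappop(heap)             # subset with the smallest sum
        subsets[i].append(x)
        heapq.heappush(heap, (s + x, i))
    return subsets

# Example: greedy_partition([8, 7, 6, 5, 4], 2) yields sums 17 and 13,
# while the optimum is 15 and 15; the ratio 17/15 is within the 7/6 bound.
```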

Several polynomial-time approximation schemes (PTAS) have been developed:

  • Graham presented the following algorithm. For any integer r>0, choose the r largest numbers in S and partition them optimally. Then allocate the remaining numbers arbitrarily. This algorithm has approximation ratio $1+\frac{1-1/k}{1+\lfloor r/k\rfloor}$ and runs in time $O(2^{r}n\log n)$. (A sketch appears after this list.)
  • Sahni presented a PTAS that attains (1+ε)OPT in time $O(n\cdot (n^{2}/\epsilon)^{k-1})$. It is an FPTAS if k is fixed. For k=2, the run-time improves to $O(n^{2}/\epsilon)$. The algorithm uses a technique called interval partitioning.
  • Hochbaum and Shmoys presented the following algorithms, which work even when k is part of the input.
    • For any r>0, an algorithm with approximation ratio at most $6/5+2^{-r}$ in time $O(n(r+\log n))$.
    • For any r>0, an algorithm with approximation ratio at most $7/6+2^{-r}$ in time $O(n(rk^{4}+\log n))$.
    • For any ε>0, an algorithm with approximation ratio at most (1+ε) in time $O((n/\varepsilon)^{1/\varepsilon^{2}})$. This is a PTAS.
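
As an illustration of Graham's scheme, here is a hedged sketch: the r largest numbers are partitioned optimally by brute force over all k^r assignments (for k=2 this matches the 2^r factor quoted above), and the remaining numbers are then placed greedily rather than arbitrarily, which can only help. All names are illustrative:

```python
from itertools import product

def graham_partition(numbers, k, r):
    """Partition the r largest numbers optimally by brute force,
    then add the remaining numbers greedily."""
    nums = sorted(numbers, reverse=True)
    head, tail = nums[:r], nums[r:]
    best_sums, best_assign = None, None
    for assign in product(range(k), repeat=len(head)):
        sums = [0] * k
        for x, i in zip(head, assign):
            sums[i] += x
        if best_sums is None or max(sums) < max(best_sums):
            best_sums, best_assign = sums, assign
    subsets = [[] for _ in range(k)]
    for x, i in zip(head, best_assign):
        subsets[i].append(x)
    sums = best_sums[:]
    for x in tail:                      # greedy placement of the rest
        i = sums.index(min(sums))
        subsets[i].append(x)
        sums[i] += x
    return subsets
```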

Maximizing the smallest sum

The approximation ratio in this context is the smallest sum in the solution returned by the algorithm, divided by the smallest sum in the optimal solution (the ratio is less than 1).

  • For greedy number partitioning, if the numbers are not sorted then the worst-case approximation ratio is 1/k. Sorting the numbers improves the approximation ratio to 5/6 when k=2, and $\frac{3k-1}{4k-2}$ in general, and this bound is tight.
  • Woeginger presented a PTAS that attains an approximation factor of $1-\varepsilon$ in time $O(c_{\varepsilon}n\log k)$, where $c_{\varepsilon}$ is a huge constant that is exponential in the required approximation factor ε. The algorithm uses Lenstra's algorithm for integer linear programming.
  • The FPTAS of Sahni works for this objective too.

Maximizing the sum of products

Jin studies a problem in which the goal is to maximize the sum, over every set i in 1,...,k, of the product of numbers in set i. In a more general variant, each set i may have a weight wi, and the goal is to maximize the weighted sum of products. This problem has an exact solution that runs in time O(n).

A PTAS for general objective functions

Let Ci (for i between 1 and k) be the sum of subset i in a given partition. Instead of minimizing the objective function max(Ci), one can minimize the objective function max(f(Ci)), where f is any fixed function. Similarly, one can minimize the objective function sum(f(Ci)), maximize min(f(Ci)), or maximize sum(f(Ci)). Alon, Azar, Woeginger and Yadid presented general PTASs (generalizing the PTASs of Sahni, Hochbaum and Shmoys, and Woeginger) for these four problems. Their algorithm works for any f which satisfies the following two conditions:

  1. A strong continuity condition called Condition F*: for every ε>0 there exists δ>0 such that, if |y-x|<δx, then |f(y)-f(x)|<εf(x).
  2. Convexity (for the minimization problems) or concavity (for the maximization problems).

The runtime of their PTASs is linear in n (the number of inputs), but exponential in the approximation precision. The PTAS for minimizing sum(f(Ci)) is based on some combinatorial observations:

  1. Let L := the average sum in a single subset (1/k of the sum of all inputs). If some input x is at least L, then there is an optimal partition in which one part contains only x. This follows from the convexity of f. Therefore, the input can be pre-processed by assigning each such input to a unique subset. After this preprocessing, one can assume that all inputs are smaller than L.
  2. There is an optimal partition in which all subset sums are strictly between L/2 and 2L (L/2 < Ci < 2L for all i in 1,...,k). In particular, the partition minimizing the sum of squares of the Ci, among all optimal partitions, satisfies these inequalities.

The PTAS uses an input rounding technique. Given the input sequence S = (v1,...,vn) and a positive integer d, the rounded sequence S(d) is defined as follows:

  • For any vj > L/d, the sequence S(d) contains an input vj# which is vj rounded up to the next integer multiple of L/d². Note that vj ≤ vj# < vj + L/d², and L/d² < vj/d, so vj# < (d+1)vj/d.
  • In addition, the sequence S(d) contains some inputs equal to L/d. The number of these inputs is determined such that the sum of all these new inputs equals the sum of all inputs in S that are at most L/d, rounded up to the next integer multiple of L/d (for example, if the sum of all "short" inputs in S is 51.3·L/d, then 52 new L/d inputs are added to S(d)). A sketch of this rounding step appears below.
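
A sketch of the rounding step in Python, using exact rational arithmetic to avoid floating-point issues; the function name and the use of Fraction are illustrative choices, not part of the source:

```python
from fractions import Fraction
from math import ceil

def round_instance(S, k, d):
    """Construct the rounded instance S(d): inputs above L/d are rounded
    up to the next multiple of L/d**2; the 'short' inputs are replaced
    by copies of L/d whose total matches their rounded-up total."""
    L = Fraction(sum(S), k)                   # average subset sum
    unit = L / d**2                           # rounding unit L/d^2
    rounded, short_total = [], Fraction(0)
    for v in S:
        if v > L / d:
            rounded.append(ceil(Fraction(v) / unit) * unit)
        else:
            short_total += v
    num_short = ceil(short_total / (L / d))   # e.g. 51.3*L/d -> 52 copies
    rounded.extend([L / d] * num_short)
    return rounded
```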

In S(d), all inputs are integer multiples of L/d². Moreover, the above two observations hold for S(d) too:

  1. Let L# be the average sum in a single subset (1/k of the sum of all inputs in S(d)). By construction, L# is at least L. Since L itself is an integer multiple of L/d², the rounding-up of inputs smaller than L cannot make them larger than L. Therefore, all inputs in S(d) are smaller than L, and hence smaller than L#.
  2. There is an optimal partition of S(d) in which all subset sums are strictly between L#/2 and 2L#. Therefore, all subsets contain at most 2d elements (since all inputs in S(d) are at least L/d).

Based on these observations, all inputs in S(d) are of the form hL/d², where h is an integer in the range $\{d, d+1, \ldots, d^{2}\}$. Therefore, the input can be represented as an integer vector $\mathbf{n} = (n_{d}, n_{d+1}, \ldots, n_{d^{2}})$, where $n_{h}$ is the number of hL/d² inputs in S(d). Moreover, each subset can be represented as an integer vector $\mathbf{t} = (t_{d}, t_{d+1}, \ldots, t_{d^{2}})$, where $t_{h}$ is the number of hL/d² inputs in the subset. The subset sum is then $C(\mathbf{t}) = \sum_{h=d}^{d^{2}} t_{h}\cdot (hL/d^{2})$. Denote by $T$ the set of vectors $\mathbf{t}$ with $L^{\#}/2 < C(\mathbf{t}) < 2L^{\#}$. Since the sum of entries in such a vector is at most 2d, the total number of such vectors is smaller than $(d^{2})^{2d} = d^{4d}$, so $|T| \leq d^{4d}$.

There are two different ways to find an optimal solution to S(d). One way uses dynamic programming: its run-time is a polynomial whose exponent depends on d. The other way uses Lenstra's algorithm for integer linear programming.

Dynamic programming solution

Define $VAL(k,\mathbf{n})$ as the optimal (minimum) value of the objective function sum(f(Ci)), when the input vector is $\mathbf{n} = (n_{d}, n_{d+1}, \ldots, n_{d^{2}})$ and it has to be partitioned into k subsets, among all partitions in which all subset sums are strictly between $L^{\#}/2$ and $2L^{\#}$.

It can be solved by the following recurrence relation:

  • $VAL(0,\mathbf{0}) = 0$ – since the objective sum is empty.
  • $VAL(1,\mathbf{n}) = f(C(\mathbf{n}))$ if $L^{\#}/2 < C(\mathbf{n}) < 2L^{\#}$ – since all inputs must be assigned to a single subset, whose sum is $C(\mathbf{n})$.
  • $VAL(1,\mathbf{n}) = \infty$ otherwise – since we do not consider optimal solutions outside this range.
  • $VAL(k,\mathbf{n}) = \min_{\mathbf{t}\leq \mathbf{n},\,\mathbf{t}\in T}\left[f(C(\mathbf{t})) + VAL(k-1,\mathbf{n}-\mathbf{t})\right]$ for all $k\geq 2$: we check all options for the k-th subset, and combine each with an optimal partition of the remainder into k−1 subsets.

For each k and n, the recurrence relation requires checking at most $|T|$ vectors. The total number of vectors $\mathbf{n}$ to check is at most $n^{d^{2}}$, where n is the original number of inputs. Therefore, the run-time of the dynamic programming algorithm is $O(k\cdot n^{d^{2}}\cdot d^{4d})$. It is polynomial in n for any fixed d, though the exponent depends on d.
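A toy rendering of this recurrence in Python, feasible only for very small d and n (both the brute-force enumeration of T and the memoized state space explode quickly); all names are illustrative:

```python
from functools import lru_cache
from itertools import product

def ptas_dp(counts, k, f, L, Lsharp, d):
    """counts is the vector (n_d, ..., n_{d^2}): counts[j] inputs of
    size (d+j)*L/d**2. Returns the minimum of sum_i f(C_i) over all
    partitions into k subsets with L#/2 < C_i < 2L#."""
    hs = tuple(range(d, d * d + 1))
    unit = L / d ** 2

    def C(t):                                  # sum of a configuration
        return sum(th * h for th, h in zip(t, hs)) * unit

    # T: all configurations with at most 2d items whose sum lies
    # strictly between L#/2 and 2L#.
    T = [t for t in product(range(2 * d + 1), repeat=len(hs))
         if sum(t) <= 2 * d and Lsharp / 2 < C(t) < 2 * Lsharp]

    @lru_cache(maxsize=None)
    def VAL(kk, n):
        if kk == 0:                            # VAL(0, 0) = 0
            return 0 if not any(n) else float("inf")
        if kk == 1:                            # one subset takes everything
            c = C(n)
            return f(c) if Lsharp / 2 < c < 2 * Lsharp else float("inf")
        best = float("inf")
        for t in T:                            # try every k-th subset
            if all(th <= nh for th, nh in zip(t, n)):
                rest = tuple(nh - th for nh, th in zip(n, t))
                best = min(best, f(C(t)) + VAL(kk - 1, rest))
        return best

    return VAL(k, tuple(counts))
```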

Integer linear programming solution

For each vector t in T, introduce a variable xt denoting the number of subsets with this configuration. Minimizing sum(f(Ci)) can be attained by solving the following ILP:

  • Minimize $\sum_{\mathbf{t}\in T} x_{\mathbf{t}}\cdot f(C(\mathbf{t}))$
  • subject to $\sum_{\mathbf{t}\in T} x_{\mathbf{t}} = k$ (the total number of subsets)
  • and $\sum_{\mathbf{t}\in T} x_{\mathbf{t}}\cdot \mathbf{t} = \mathbf{n}$ (the total vector of inputs)
  • and $x_{\mathbf{t}}\geq 0$.

The number of variables is at most $d^{4d}$, and the number of constraints is $d^{4d}+d^{2}-d+2$ – both are constants independent of n and k. Therefore, Lenstra's algorithm can be used. Its run-time is exponential in the dimension ($d^{4d}$), but polynomial in the binary representation of the coefficients, which are of size O(log n). Constructing the ILP itself takes time O(n).
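
In practice the same ILP can be handed to any off-the-shelf integer-programming solver. The sketch below uses the PuLP modelling library rather than Lenstra's algorithm (so it carries no theoretical run-time guarantee); T, C, f, n and k are as in the text, and the function name is illustrative:

```python
import pulp

def solve_config_ilp(T, C, f, n_vec, k):
    """Minimize sum over configurations t of x_t * f(C(t)), subject to
    the constraints above; T is a list of configuration tuples."""
    prob = pulp.LpProblem("multiway_partition", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{i}", lowBound=0, cat="Integer")
         for i in range(len(T))]
    prob += pulp.lpSum(x[i] * f(C(t)) for i, t in enumerate(T))
    prob += pulp.lpSum(x) == k                       # number of subsets
    for j in range(len(n_vec)):                      # input-vector equations
        prob += pulp.lpSum(x[i] * t[j] for i, t in enumerate(T)) == n_vec[j]
    prob.solve()
    return [(T[i], int(x[i].value())) for i in range(len(T))
            if x[i].value() and x[i].value() > 0]
```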

Converting the solution from the rounded to the original instance

The following lemmas relate the partitions of the rounded instance S(d) and the original instance S.

  • For every partition of S with sums Ci, there is a partition of S(d) with sums Ci#, where $C_{i}-\frac{L}{d}\leq C_{i}^{\#}\leq \frac{d+1}{d}C_{i}+\frac{L}{d}$.
  • For every partition of S(d) with sums Ci#, there is a partition of S with sums Ci, where $\frac{d}{d+1}C_{i}^{\#}-2\frac{L}{d}\leq C_{i}\leq C_{i}^{\#}+\frac{L}{d}$, and it can be found in time O(n).

Given a desired approximation precision ε>0, let δ>0 be the constant corresponding to ε/3, whose existence is guaranteed by Condition F*. Let $d := \lceil 5/\delta \rceil$. It is possible to show that the converted partition of S has a total cost of at most $(1+\frac{\epsilon}{3})\cdot OPT\leq (1+\epsilon)\cdot OPT$, so the approximation ratio is 1+ε.

Non-existence of PTAS for some objective functions

In contrast to the above result, if we take f(x) = 2^x, or f(x) = (x−1)², then no PTAS for minimizing sum(f(Ci)) exists unless P=NP. Note that both functions are convex, but they do not satisfy Condition F* above. The proof is by reduction from the partition problem.

Exact algorithms

There are exact algorithms that always find the optimal partition. Since the problem is NP-hard, such algorithms might take exponential time in general, but may be practically usable in certain cases.

  • The pseudopolynomial time number partitioning algorithm takes $O(n(k-1)m^{k-1})$ memory, where m is the largest number in the input. It is practical only when k=2, or when k=3 and the inputs are small integers.
  • The Complete Greedy Algorithm (CGA) considers all partitions by constructing a k-ary tree. Each level in the tree corresponds to an input number, where the root corresponds to the largest number, the level below to the next-largest number, etc. Each of the k branches corresponds to a different set in which the current number can be put. Traversing the tree in depth-first order requires only O(n) space, but might take $O(k^{n})$ time. The runtime can be improved by using a greedy heuristic: in each level, develop first the branch in which the current number is put in the set with the smallest sum. This algorithm finds the solution of greedy number partitioning first, but then proceeds to look for better solutions. (A sketch appears after this list.)
  • The Complete Karmarkar-Karp algorithm (CKK) considers all partitions by constructing a tree of degree $k!$. Each level corresponds to a pair of k-tuples, and each of the $k!$ branches corresponds to a different way of combining these k-tuples. This algorithm finds the solution of the largest differencing method first, but then proceeds to find better solutions. For k=2 and k=3, CKK runs substantially faster than CGA on random instances. The advantage of CKK over CGA is much larger when an equal partition exists, and can be of several orders of magnitude. In practice, with k=2, problems of arbitrary size can be solved by CKK if the numbers have at most 12 significant digits; with k=3, at most 6 significant digits. CKK can also run as an anytime algorithm: it finds the KK solution first, and then finds progressively better solutions as time allows (possibly requiring exponential time to reach optimality, for the worst instances). For k ≥ 4, CKK becomes much slower, and CGA performs better.
  • Korf, Schreiber and Moffitt presented hybrid algorithms, combining CKK, CGA and other methods from the subset sum problem and the bin packing problem to achieve an even better performance. Their 2018 journal paper summarizes works from several previous conference papers:
    • Recursive Number Partitioning (RNP) uses CKK for k=2, but for k>2 it recursively splits S into subsets and splits k into halves.
    • Hybrid recursive number partitioning (HRNP).
    • Improved bin completion.
    • Improved search strategies.
    • Few machines algorithm.
    • Cached iterative weakening (CIW).
    • Sequential partitioning.
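
For concreteness, here is a compact sketch of the complete greedy search for minimizing the largest sum, with the greedy child ordering and a simple bound-based pruning rule; it is illustrative, not Korf's exact implementation:

```python
def complete_greedy(numbers, k):
    """Depth-first search over all assignments; at each node, the
    subsets are tried in increasing order of current sum, so the first
    leaf reached is exactly the greedy solution."""
    nums = sorted(numbers, reverse=True)
    best = [sum(nums) + 1]                   # best makespan found so far
    best_sums = [None]

    def dfs(j, sums):
        if max(sums) >= best[0]:             # prune: cannot improve
            return
        if j == len(nums):
            best[0], best_sums[0] = max(sums), tuple(sums)
            return
        tried = set()
        for i in sorted(range(k), key=lambda i: sums[i]):
            if sums[i] in tried:             # equal sums: symmetric subtree
                continue
            tried.add(sums[i])
            sums[i] += nums[j]
            dfs(j + 1, sums)
            sums[i] -= nums[j]

    dfs(0, [0] * k)
    return best[0], best_sums[0]
```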

Reduction to bin packing

The bin packing problem has many fast solvers. A BP solver can be used to find an optimal number partitioning. The idea is to use binary search to find the optimal makespan. To initialize the binary search, we need a lower bound and an upper bound:

  • Some lower bounds on the makespan are: sum(S)/k – the average value per subset; $s_{1}$ – the largest number in S; and $s_{k}+s_{k+1}$ – the size of a bin in the optimal partition of only the largest k+1 numbers.
  • Some upper bounds can be attained by running heuristic algorithms, such as the greedy algorithm or KK.

Given a lower and an upper bound, run the BP solver with bin size middle := (lower+upper)/2.

  • If the result contains more than k bins, then the optimal makespan must be larger than middle: set lower to middle and repeat.
  • If the result contains at most k bins, then the optimal makespan may be smaller than middle: set upper to middle and repeat. (A sketch of this loop appears below.)
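
A sketch of this loop, assuming integer inputs and an exact bin-packing solver supplied as a callable bp_solver(items, capacity) -> minimal number of bins (a hypothetical interface; any exact BP solver can be wrapped this way):

```python
def optimal_makespan(S, k, bp_solver):
    """Binary search for the smallest capacity c such that S packs into
    at most k bins of size c; that c is the optimal makespan."""
    s = sorted(S, reverse=True)
    total = sum(S)
    lower = max(-(-total // k),                    # ceil(total / k)
                s[0],                              # largest input
                s[k - 1] + s[k] if len(s) > k else 0)
    sums = [0] * k                                 # greedy upper bound
    for x in s:
        sums[sums.index(min(sums))] += x
    upper = max(sums)
    while lower < upper:
        middle = (lower + upper) // 2
        if bp_solver(S, middle) > k:
            lower = middle + 1   # needs more than k bins: makespan is larger
        else:
            upper = middle       # fits in k bins: makespan is at most middle
    return lower
```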

Variants

In the balanced number partitioning problem, there are constraints on the number of items that can be allocated to each subset (these are called cardinality constraints).

Another variant is the multidimensional number partitioning.

Applications

One application of the partition problem is for manipulation of elections. Suppose there are three candidates (A, B and C). A single candidate should be elected using the veto voting rule, i.e., each voter vetoes a single candidate and the candidate with the fewest vetoes wins. If a coalition wants to ensure that C is elected, they should partition their vetoes among A and B so as to maximize the smallest number of vetoes each of them gets. If the votes are weighted, then the problem can be reduced to the partition problem, and thus it can be solved efficiently using CKK. For k=2, the same is true for any other voting rule that is based on scoring. However, for k>2 and other voting rules, some other techniques are required.

Implementations

  • Python: The prtpy package contains code for various number-partitioning and bin-packing algorithms.

References

  1. Graham, Ron L. (1969-03-01). "Bounds on Multiprocessing Timing Anomalies". SIAM Journal on Applied Mathematics. 17 (2): 416–429. doi:10.1137/0117039. ISSN 0036-1399.
  2. Mertens, Stephan (2006), "The Easiest Hard Problem: Number Partitioning", in Allon Percus; Gabriel Istrate; Cristopher Moore (eds.), Computational Complexity and Statistical Physics, Oxford University Press US, p. 125, arXiv:cond-mat/0310317, Bibcode:2003cond.mat.10317M, ISBN 978-0-19-517737-4.
  3. Walsh, Toby (2009-07-11). "Where Are the Really Hard Manipulation Problems? The Phase Transition in Manipulating the Veto Rule" (PDF). Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence. San Francisco, California, USA: Morgan Kaufmann Publishers Inc. pp. 324–329. Archived (PDF) from the original on 2020-07-10. Retrieved 2021-10-05.
  4. Friesen, D. K.; Deuermeyer, B. L. (1981-02-01). "Analysis of Greedy Solutions for a Replacement Part Sequencing Problem". Mathematics of Operations Research. 6 (1): 74–87. doi:10.1287/moor.6.1.74. ISSN 0364-765X.
  5. Deuermeyer, Bryan L.; Friesen, Donald K.; Langston, Michael A. (1982-06-01). "Scheduling to Maximize the Minimum Processor Finish Time in a Multiprocessor System". SIAM Journal on Algebraic and Discrete Methods. 3 (2): 190–196. doi:10.1137/0603019. ISSN 0196-5212.
  6. Korf, Richard Earl (2010-08-25). "Objective Functions for Multi-Way Number Partitioning". Third Annual Symposium on Combinatorial Search. 1: 71–72. doi:10.1609/socs.v1i1.18172. S2CID 45875088.
  7. Walter, Rico (2013-01-01). "Comparing the minimum completion times of two longest-first scheduling-heuristics". Central European Journal of Operations Research. 21 (1): 125–139. doi:10.1007/s10100-011-0217-4. ISSN 1613-9178. S2CID 17222989.
  8. Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company. p. 238. ISBN 978-0716710448.
  9. Hochbaum, Dorit S.; Shmoys, David B. (1987-01-01). "Using dual approximation algorithms for scheduling problems theoretical and practical results". Journal of the ACM. 34 (1): 144–162. doi:10.1145/7531.7535. ISSN 0004-5411. S2CID 9739129.
  10. Sahni, Sartaj K. (1976-01-01). "Algorithms for Scheduling Independent Tasks". Journal of the ACM. 23 (1): 116–127. doi:10.1145/321921.321934. ISSN 0004-5411. S2CID 10956951.
  11. Woeginger, Gerhard J. (1997-05-01). "A polynomial-time approximation scheme for maximizing the minimum machine completion time". Operations Research Letters. 20 (4): 149–154. doi:10.1016/S0167-6377(96)00055-7. ISSN 0167-6377.
  12. Csirik, János; Kellerer, Hans; Woeginger, Gerhard (1992-06-01). "The exact LPT-bound for maximizing the minimum completion time". Operations Research Letters. 11 (5): 281–287. doi:10.1016/0167-6377(92)90004-M. ISSN 0167-6377.
  13. Jin, Kai (2017). "Optimal Partitioning Which Maximizes the Weighted Sum of Products". In Xiao, Mingyu; Rosamond, Frances (eds.). Frontiers in Algorithmics. Lecture Notes in Computer Science. Vol. 10336. Cham: Springer International Publishing. pp. 127–138. doi:10.1007/978-3-319-59605-1_12. ISBN 978-3-319-59605-1.
  14. Alon, Noga; Azar, Yossi; Woeginger, Gerhard J.; Yadid, Tal (1998). "Approximation schemes for scheduling on parallel machines". Journal of Scheduling. 1 (1): 55–66. doi:10.1002/(SICI)1099-1425(199806)1:1<55::AID-JOS2>3.0.CO;2-J. ISSN 1099-1425.
  15. Korf, Richard E. (2009). Multi-Way Number Partitioning (PDF). IJCAI.
  16. Korf, Richard E. (1995-08-20). "From approximate to optimal solutions: a case study of number partitioning". Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 1. IJCAI'95. Montreal, Quebec, Canada: Morgan Kaufmann Publishers Inc.: 266–272. ISBN 978-1-55860-363-9.
  17. Korf, Richard E. (1998-12-01). "A complete anytime algorithm for number partitioning". Artificial Intelligence. 106 (2): 181–203. doi:10.1016/S0004-3702(98)00086-1. ISSN 0004-3702.
  18. Schreiber, Ethan L.; Korf, Richard E.; Moffitt, Michael D. (2018-07-25). "Optimal Multi-Way Number Partitioning". Journal of the ACM. 65 (4): 24:1–24:61. doi:10.1145/3184400. ISSN 0004-5411. S2CID 63854223.
  19. Korf, Richard E. (2011-07-16). "A hybrid recursive multi-way number partitioning algorithm". Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume One. IJCAI'11. Barcelona, Catalonia, Spain: AAAI Press: 591–596. ISBN 978-1-57735-513-7.
  20. Schreiber, Ethan L.; Korf, Richard E. (2013-08-03). "Improved bin completion for optimal bin packing and number partitioning" (PDF). Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence. IJCAI '13. Beijing, China: AAAI Press: 651–658. ISBN 978-1-57735-633-2.
  21. Moffitt, Michael D. (2013-08-03). "Search strategies for optimal multi-way number partitioning" (PDF). Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence. IJCAI '13. Beijing, China: AAAI Press: 623–629. ISBN 978-1-57735-633-2.
  22. Korf, Richard; Schreiber, Ethan (2013-06-02). "Optimally Scheduling Small Numbers of Identical Parallel Machines". Proceedings of the International Conference on Automated Planning and Scheduling. 23: 144–152. doi:10.1609/icaps.v23i1.13544. ISSN 2334-0843. S2CID 12458816.
  23. Schreiber, Ethan L.; Korf, Richard E. (2014-07-27). "Cached Iterative Weakening for Optimal Multi-Way Number Partitioning". Proceedings of the AAAI Conference on Artificial Intelligence. AAAI'14. 28. Québec City, Québec, Canada: AAAI Press: 2738–2744. doi:10.1609/aaai.v28i1.9122. S2CID 8594071.
  24. Korf, Richard E.; Schreiber, Ethan L.; Moffitt, Michael D. (2014). "Optimal Sequential Multi-Way Number Partitioning" (PDF).
  25. Pop, Petrică C.; Matei, Oliviu (2013-11-01). "A memetic algorithm approach for solving the multidimensional multi-way number partitioning problem". Applied Mathematical Modelling. 37 (22): 9191–9202. doi:10.1016/j.apm.2013.03.075. ISSN 0307-904X.