Unless the problem space includes all possible functions f, function f must itself have a complexity of at least n to use every number from both lists; otherwise we can ignore some elements of either list, thereby lowering the complexity below O(n!²).
If the problem space does include all possible functions f, I feel like it will still be faster, complexity-wise, to find which elements the function depends on than to assume it depends on every element. Therefore either the problem cannot be solved in O(n!²), or it can be solved quicker.
By “certain distance function”, I mean a specific function that forces the problem to be O(n!²).
But fear not! I have an idea of such a function.
So the idea of such a function is the Hamming distance of a hash (like SHA-256). The hash is computed iteratively by

h[n] = hash(concat(h[n - 1], l[n]))

This ensures:

- We can save partial results of the hashing, so we don’t need to iterate through the entire lists for each permutation. Otherwise we would get another factor n in time complexity.
- The cost of computing the Hamming distance is O(1).
- Order matters. We can’t cheat and come up with a way to exclude some permutations.
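Roughly, in code, the chaining looks like this (a minimal sketch; the 64-bit mixing function and the names are my own stand-ins for SHA-256, just to keep it dependency-free):

```cpp
#include <cstdint>
#include <vector>

// Stand-in for a real cryptographic hash like SHA-256 (an assumption made
// to keep the sketch self-contained; any hash with this chaining shape works).
uint64_t mix(uint64_t x) {
    x += 0x9e3779b97f4a7c15ULL;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
    return x ^ (x >> 31);
}

// h[k] = hash(concat(h[k - 1], l[k])): each prefix hash depends only on the
// previous prefix hash and the next element, so partial results can be reused.
std::vector<uint64_t> chain_hashes(const std::vector<uint64_t>& l) {
    std::vector<uint64_t> h(l.size());
    uint64_t prev = 0;                 // seed playing the role of h[-1]
    for (size_t k = 0; k < l.size(); ++k) {
        prev = mix(prev ^ mix(l[k])); // combine h[k - 1] with l[k]
        h[k] = prev;
    }
    return h;
}
```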
No idea of the practical use of such an algorithm. Probably completely useless.
Honestly, I was very suspicious that you could get away with only calling the hash function once per permutation, but I couldn’t think of how to prove it one way or the other.
So I implemented it, first in Python for prototyping, then in C++ for longer runs… well, only half of it, i.e. iterating over permutations and computing the hash, but not doing anything with it. Unfortunately my implementation is O(n²) anyway; unsure if there is a way to optimize it, whatever. code
As of writing, I have results for lists of n ∈ 1…13 (13 took 18 minutes, 12 took about 1 minute; can’t be bothered to run it for longer), and the number of hashes does not follow n! as your reasoning suggests, but is closer to n! ⋅ n.
Link to the Desmos page.
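For reference, the counting is essentially this shape (a simplified sketch, not the linked code): recomputing the whole chain for every permutation gives exactly n × n! hash calls, which is the curve the data follows.

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

// Tallies hash calls when the whole chain is recomputed for every
// permutation: n calls for each of the n! permutations, so n * n! in total.
uint64_t count_hashes_naive(int n) {
    std::vector<int> perm(n);
    std::iota(perm.begin(), perm.end(), 1);  // start from 1 2 ... n (sorted)
    uint64_t hash_calls = 0;
    do {
        for (int i = 0; i < n; i++) {
            ++hash_calls;  // one hash(concat(h[i - 1], l[i])) per element
        }
    } while (std::next_permutation(perm.begin(), perm.end()));
    return hash_calls;
}
```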
Anyway, with your proposed function it doesn’t seem to be possible to achieve O(n!²) complexity.
Also, don’t be so negative about your own creation. You could write an entire paper about this problem, IMHO, and have a problem with your name on it. Though I’d rather not have to title a paper “complexity of the magic lobster party problem”, so yeah.
Good effort actually implementing it. I was pretty confident my solution was correct, but I’m not as confident anymore. I will think about it a bit more.
So in your code you do the following for each permutation:
for (int i = 0; i < n; i++) {
You’re iterating through the entire list for each permutation, which yields an O(n × n!) time complexity. My idea was an attempt to avoid that extra factor of n.
I’m not sure how std implements permutations, but the order I want is:
1 2 3 4 5
1 2 3 5 4
1 2 4 3 5
1 2 4 5 3
1 2 5 3 4
1 2 5 4 3
1 3 2 4 5
etc.
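For what it’s worth, std::next_permutation does produce exactly this order, since it steps through permutations lexicographically. A quick check:

```cpp
#include <algorithm>
#include <cstdio>
#include <iterator>

// std::next_permutation steps through permutations in lexicographic order,
// which is exactly the order listed above (1 2 3 4 5, 1 2 3 5 4, ...).
int main() {
    int perm[] = {1, 2, 3, 4, 5};
    do {
        for (int v : perm) std::printf("%d ", v);
        std::printf("\n");
    } while (std::next_permutation(std::begin(perm), std::end(perm)));
}
```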
Note that the last 2 numbers change every iteration, the third-last number changes every 2 iterations, and the fourth-last number changes every 2 × 3 iterations. The first number in this example changes every 2 × 3 × 4 iterations.
This gives us an idea of how often each hash needs to be updated. We don’t need to recalculate the hash for 1 2 3 between the first and second iterations, for example.
The first hash will be updated 5 times. The second hash 5 × 4 times. The third 5 × 4 × 3 times. The fourth 5 × 4 × 3 × 2 times. The fifth 5 × 4 × 3 × 2 × 1 times.
So the time complexity should be the number of times we need to calculate the hash function, which is O(n + n(n − 1) + n(n − 1)(n − 2) + … + n!) = O(n!).
EDIT: On second thought, I’m not sure this is a legal simplification. It might actually be O(n × n!), as the number of terms grows with n. But in that case, shouldn’t all permutation algorithms be O(n × n!)?
EDIT 2: Found this link: https://stackoverflow.com/a/39126141
The time complexity can be simplified to O(2.71828 × n!), which makes it O(n!), so it’s a legal simplification! (Although my reasoning was wrong, I arrived at the correct conclusion.)
END EDIT.
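Written out, the simplification goes like this: the k-th term of the sum counts the distinct length-k prefixes, which is n!/(n − k)!, so

```latex
n + n(n-1) + n(n-1)(n-2) + \dots + n!
  = \sum_{k=1}^{n} \frac{n!}{(n-k)!}
  = n! \sum_{j=0}^{n-1} \frac{1}{j!}
  < e \cdot n!
```

and since the partial sums of 1/j! converge to e ≈ 2.71828, the whole sum is bounded by e ⋅ n! = O(n!).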
We do the same for the second list (for each permutation), which makes it O(n!²).
Finally we compute the Hamming distance, but this is done between constant-length hashes, so it’s going to be constant time, O(1), in this context.
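Something like this, assuming the 256-bit digest is stored as four 64-bit words (a sketch; std::popcount needs C++20):

```cpp
#include <array>
#include <bit>      // std::popcount (C++20)
#include <cstdint>

// Hamming distance between two fixed-width digests. Four 64-bit words stand
// in for a 256-bit hash. The loop length is a constant independent of n, so
// in the context of the permutation search this is O(1).
int hamming256(const std::array<uint64_t, 4>& a,
               const std::array<uint64_t, 4>& b) {
    int d = 0;
    for (int w = 0; w < 4; ++w) {
        d += std::popcount(a[w] ^ b[w]);  // differing bits in this word
    }
    return d;
}
```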
Maybe I can try my own implementation once I have access to a proper computer.
You forgot about updating the hashes of the items that come after the modified items, so while it could be slightly faster than O((n! × n)²), not by much, as my data shows.
In other words, every time you update the first hash you also need to update all the hashes after it, et cetera.
So the complexity is O(n × n + n × (n − 1) × (n − 1) + … + n! × 1), though I don’t know how to simplify that.
My implementation: https://pastebin.com/3PskMZqz
Results are at the bottom of the file.
I’m taking into account that when I update a hash, all the hashes to the right of it should also be updated.
The number of hashes is about 2.71828 × n!, as predicted. The time seems to be proportional to n! as well (n = 12 is about 12 times slower than n = 11, which in turn is about 11 times slower than n = 10).
Interestingly, this program turned out to be a fun and inefficient way of calculating the digits of e.
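The idea, as a sketch (not the pastebin code; the 64-bit mixer again stands in for the real hash): compare against the previous permutation to find the first changed position, then rehash only from there to the end.

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

uint64_t mix(uint64_t x) {  // stand-in hash, same as the earlier sketch
    x += 0x9e3779b97f4a7c15ULL;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
    return x ^ (x >> 31);
}

// Tallies hash calls when only the suffix from the first changed position is
// rehashed. Everything to the right of a change is stale, so it is recomputed.
uint64_t count_hashes_incremental(int n) {
    std::vector<int> perm(n), prev(n);
    std::iota(perm.begin(), perm.end(), 1);
    std::vector<uint64_t> h(n);
    uint64_t hash_calls = 0;

    uint64_t acc = 0;  // initial full chain: n calls
    for (int i = 0; i < n; i++) {
        acc = mix(acc ^ mix(perm[i]));
        h[i] = acc;
        ++hash_calls;
    }

    prev = perm;
    while (std::next_permutation(perm.begin(), perm.end())) {
        int first = 0;  // first index that changed (perm != prev, so this stops)
        while (perm[first] == prev[first]) ++first;
        uint64_t a = (first == 0) ? 0 : h[first - 1];
        for (int i = first; i < n; i++) {  // rehash the changed suffix
            a = mix(a ^ mix(perm[i]));
            h[i] = a;
            ++hash_calls;
        }
        prev = perm;
    }
    return hash_calls;  // comes out near 2.71828 * n!, as in the results
}
```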
Actually, all of my effort was wasted, since calculating the Hamming distance between two lists of n hashes has a complexity of O(n), not O(1). Agh.
I realized this right after walking away from my devices to eat something :(
EDIT: You can calculate the Hamming distance one element at a time, just after rehashing that element, so never mind.
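Something like this, assuming the distance aggregates the per-position Hamming distances between the two chains (which is what would make it O(n) from scratch): when positions [first, n) of one chain are rehashed, only those terms of the running total change.

```cpp
#include <bit>      // std::popcount (C++20)
#include <cstdint>
#include <vector>

uint64_t mix(uint64_t x) {  // stand-in hash, same as the earlier sketches
    x += 0x9e3779b97f4a7c15ULL;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
    return x ^ (x >> 31);
}

// Rehashes chain B from `first` onward and keeps the total distance against
// chain A current by swapping out only the terms that actually changed.
void rehash_and_update(std::vector<uint64_t>& hB, const std::vector<int>& perm,
                       const std::vector<uint64_t>& hA,
                       int first, uint64_t& total) {
    int n = static_cast<int>(hB.size());
    uint64_t acc = (first == 0) ? 0 : hB[first - 1];
    for (int i = first; i < n; i++) {
        total -= std::popcount(hA[i] ^ hB[i]);  // drop the stale term
        acc = mix(acc ^ mix(static_cast<uint64_t>(perm[i])));
        hB[i] = acc;
        total += std::popcount(hA[i] ^ hB[i]);  // add the fresh term
    }
}
```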
Scalable is not always quicker. Quicker is not always scalable.