Changing the hashing algorithm
-
The problem is that creating an ASIC is relatively cost-intensive, and this is one of the reasons for the high price.
The Gridseed alpha test units seemed to be quite cheap hardware, but building a farm generating 500-1000 kHash again becomes a rather high investment that only pays back over time. If we talk about 100-500 MHash, the cost will be close to the already available ASICs.
This high upfront investment drives centralization, as many people can't afford it.
I doubt that creating our own ASICs would help here.
-
I actually like the use of Neo in the name. It’s already being associated with Bitcoin (in Cyprus).
-
Another reason not to panic and choose the wrong solution is that Scrypt (or scrypt) was actually designed to be resistant to ASICs, so it is intrinsically not as scalable in hardware as SHA-256.
https://www.tarsnap.com/scrypt.html
The high-speed capability of SHA-256 hashing will be held back by memory speed in the Scrypt version of any ASIC device. Modern GPUs have the benefit of scale and are making significant improvements in efficiency.
ASICs will not be as good for Scrypt, or as quick to increase in speed, as they were for plain SHA-256. Memory usage is the bottleneck.
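As a rough illustration (standard scrypt parameters, not figures measured on any particular device): the scrypt used here runs with (N, r, p) = (1024, 1, 1), so each hashing instance needs a scratchpad of about 128 × r × N = 128 × 1024 bytes ≈ 128 KiB, and a chip can only run as many instances in parallel as it has fast memory to feed them.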
Another point is making sure any software release is high quality and stable. We are announcing our probable intention to change if we feel that Scrypt ASICs are centralising the network.
We are certainly entering a new phase of multiple viable coin networks, where a coin will only require enough miners to ensure transactions are processed in a secure and consistent manner. With the improvements Feathercoin has already made, it can potentially outperform Litecoin, in particular through the increased transaction speed.
As far as testing any new hashing algorithm goes, the main things to prove are: "How resistant is it to ASICs?", "Does it work?", "Is it the least amount of change for the greatest effect?", and "How much inefficiency has it added to the system?"
-
I really like the idea of designing our own. It has two advantages as I see it: 1) if no one else shares it, it is less likely to be worthwhile to try to build an ASIC for it, and 2) we can build in key features specifically designed to thwart efficient ASIC design. As for making it non-ASICable, a few thoughts come to mind.
Make the algorithm's mechanism non-static. Maybe have the rules of the logical flow be dictated by some hash. Then, if someone does implement an ASIC, the best they could hope to accomplish is a mechanism that works like a processor and changes its flow based on metadata.
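To make that concrete, here is a toy sketch in C (not a proposal for the real hash; every name and constant below is made up) of letting bits of a hash dictate which mixing step runs next, so the control flow is data-dependent rather than a fixed pipeline:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t rotl32(uint32_t x, int r) { return (x << r) | (x >> (32 - r)); }

    /* Three interchangeable mixing steps; hash bits pick which one runs. */
    static uint32_t mix_add(uint32_t a, uint32_t b) { return a + rotl32(b, 7); }
    static uint32_t mix_xor(uint32_t a, uint32_t b) { return a ^ rotl32(b, 13); }
    static uint32_t mix_mul(uint32_t a, uint32_t b) { return (a | 1) * (b | 1); }

    /* Two selector bits per round choose the operation, so the sequence of
     * steps differs for every selector value instead of being fixed in silicon. */
    static uint32_t hash_with_dynamic_flow(uint32_t state, uint32_t selector_hash)
    {
        for (int round = 0; round < 16; round++) {
            switch ((selector_hash >> (2 * round)) & 3) {
            case 0:  state = mix_add(state, selector_hash); break;
            case 1:  state = mix_xor(state, selector_hash); break;
            default: state = mix_mul(state, selector_hash); break;
            }
        }
        return state;
    }

    int main(void)
    {
        printf("%08x\n", (unsigned)hash_with_dynamic_flow(0xdeadbeefu, 0x12345678u));
        return 0;
    }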
Another thought is to make the algorithm solvable in more than one way: an iterative process that degrades, so that the longer you work on the problem, the easier it gets to solve but the more resources it takes. I don't have such an algorithm off hand, but it would make ASIC circuitry reach a limit and have to reset back to the harder problem with a new seed once it ran out of resources. A CPU-based or even GPU-based system, however, could continue to allocate offline storage or memory not available to the ASICs. In this case that's ideal, because the next iteration would be easier still to solve, which is better than starting over. Such a "magic algorithm" would really do wonders. Such a system might have one solution that can be reached a number of ways, or alternatively multiple solutions that are all valid, so that "iteration 1000, seed xx2400" works as does "iteration 2, seed aa4azzz231a".
The final way of making ASICs less desirable that comes to mind is having a planned algorithm change: every 9 months the algorithm is changed, which is kind of what is going on right now.
*edit* I realize I'm talking in generalities and this is specifically about making a hash, which in turn is used by looking at the leading digits and hoping for a series of 0s, something set up so we can easily adjust difficulty. Just take my thoughts in that context.
-
Or once a year? Happy algo change day everyone Lol.
:)
-
Yeah, hence my back-pedalling; it's not the word I was looking for.
Bad joke, sorry. No offense intended.
To be honest, I don't think the name really is important, is it? Considering what we're doing, the name itself would have little impact.
I'm just saying I'm not so stressed. I thought ABC would have been a good one, but the more I think about it, a name for the algo is really the last of our concerns at the moment.
Yeah, you're probably right. The reason I'm talking about the name is that I want to feel like I'm contributing. I don't know enough to make recommendations on the what and how of changing the hash type.
-
The final way of making ASICs less desirable that comes to mind is having a planned algorithm change: every 9 months the algorithm is changed, which is kind of what is going on right now.
At first this sounds like a totally daft idea, but actually, the more I think about it the more appealing it sounds. It would however demand a mining client that can handle the change of algorithms so you don’t need to switch software every time the algo changes.
-
I like SuperScrypt too. It makes everything else seem inferior.
-
I have been wondering: what effect would the algo change have on the block chain?
Is it just the mining that would be affected, or would it also affect what is held in the block chain? Would there need to be any specific code to deal with blocks before a cut-off point and blocks after?
This may seem like a silly question to those in the know, but if this is the case, would every client of the block chain need this workaround?
-
I have been wondering: what effect would the algo change have on the block chain?
Is it just the mining that would be affected, or would it also affect what is held in the block chain? Would there need to be any specific code to deal with blocks before a cut-off point and blocks after?
This may seem like a silly question to those in the know, but if this is the case, would every client of the block chain need this workaround?
Blocks will be mined with one code before the switch and another after. The same applies to block verification. A hash is a hash no matter what produces it, as long as it's a valid one. There is nearly zero chance of two blocks having the same hash, even if they're produced using the same hashing algorithm.
A switch is a hard fork, and every client needs to be updated, preferably in advance. The mining transition is more complicated.
-
So would it still be possible to verify every block if downloading a client for the first time?
Is there just

    if (block < firstChangedBlock)
        { use old algo }
    else
        { use new algo }

or something similar inside the code?
Would it be possible to regenerate all the old blocks using the new algo, so that we don't need the switch-over? Although that's like rewriting history.
-
So would it still be possible to verify every block if downloading a client for the first time?
Is there just

    if (block < firstChangedBlock)
        { use old algo }
    else
        { use new algo }

or something similar inside the code?
Something like that. I’m not sure if it’s better to switch on block number rather than date/time (time stamp).
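For what it's worth, here is a minimal sketch of the height-based variant; the function names, fork height, and stub hashes are all made up for illustration, and the real check would live in the client's block verification code. Switching on the block's timestamp would look the same with a time comparison instead of a height comparison.

    #include <stdint.h>
    #include <stdio.h>

    #define FORK_HEIGHT 500000   /* hypothetical switch-over block number */

    /* Stand-ins for the real hash functions (Scrypt before, the
     * replacement after); they just return markers here. */
    static uint32_t old_pow_hash(const uint8_t *hdr, size_t len) { (void)hdr; (void)len; return 1; }
    static uint32_t new_pow_hash(const uint8_t *hdr, size_t len) { (void)hdr; (void)len; return 2; }

    /* Every client verifies a block with the algorithm that was in force
     * when that block was mined, so historical blocks stay valid. */
    static uint32_t pow_hash_for_block(int height, const uint8_t *hdr, size_t len)
    {
        if (height < FORK_HEIGHT)
            return old_pow_hash(hdr, len);   /* pre-fork blocks keep the old rule */
        return new_pow_hash(hdr, len);       /* post-fork blocks use the new one */
    }

    int main(void)
    {
        uint8_t header[80] = { 0 };          /* dummy 80-byte block header */
        printf("%u %u\n",
               (unsigned)pow_hash_for_block(1, header, sizeof header),
               (unsigned)pow_hash_for_block(FORK_HEIGHT, header, sizeof header));
        return 0;
    }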
Would it be possible to regenerate all the old blocks using the new algo, so that we don't need the switch-over? Although that's like rewriting history.
Absolutely not.
-
This may be completely daft, but would a randomly switched dynamic algorithm be possible? I mean something like 6 possible algorithms, where the algorithm in use is determined by an evaluation of the hash from the last block or a sequence of blocks.
Put extremely simply, using 2 algos:

    if (last block hash = odd number)
        { use algo1 }
    else
        { use algo2 }

Sorry if that is the most stupid suggestion yet :-[
I'd like to see the ASIC that could handle that, lol.
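Just to make the selection step concrete, a minimal sketch in C; the constant and function names are invented, and a real client would map the index onto six actual hash functions:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_ALGOS 6   /* hypothetical pool of hash algorithms */

    /* Pick the algorithm for the next block from one byte of the
     * previous block's hash. */
    static int select_algo(const uint8_t prev_block_hash[32])
    {
        return prev_block_hash[31] % NUM_ALGOS;
    }

    int main(void)
    {
        uint8_t example_hash[32] = { 0 };
        example_hash[31] = 0xd7;             /* pretend last byte of the previous hash */
        printf("next block uses algo %d\n", select_algo(example_hash));
        return 0;
    }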
-
This may be completely daft, but would a randomly switched dynamic algorithm be possible? I mean something like 6 possible algorithms, where the algorithm in use is determined by an evaluation of the hash from the last block or a sequence of blocks.
Put extremely simply, using 2 algos:

    if (last block hash = odd number)
        { use algo1 }
    else
        { use algo2 }

Sorry if that is the most stupid suggestion yet :-[
I'd like to see the ASIC that could handle that, lol.
Contrary to common belief, Quark with its X6 and Darkcoin with its X11 aren't ASIC-resistant. Although they use a number of hash functions, all of them require very little memory and can be implemented in ASIC hardware easily. If these coins rise to at least 100M USD market capitalisation, you'll see ASIC announcements coming.
This is the core of Scrypt:
    void scrypt_1024_1_1_256_sp(const char *input, char *output, char *scratchpad)
    {
        uint8_t B[128];
        uint32_t X[32];
        uint32_t *V;
        uint32_t i, j, k;

        V = (uint32_t *)(((uintptr_t)(scratchpad) + 63) & ~(uintptr_t)(63));

        PBKDF2_SHA256((const uint8_t *)input, 80, (const uint8_t *)input, 80, 1, B, 128);

        for (k = 0; k < 32; k++)
            X[k] = le32dec(&B[4 * k]);

        for (i = 0; i < 1024; i++) {
            memcpy(&V[i * 32], X, 128);
            xor_salsa8(&X[0], &X[16]);
            xor_salsa8(&X[16], &X[0]);
        }

        for (i = 0; i < 1024; i++) {
            j = 32 * (X[16] & 1023);
            for (k = 0; k < 32; k++)
                X[k] ^= V[j + k];
            xor_salsa8(&X[0], &X[16]);
            xor_salsa8(&X[16], &X[0]);
        }

        for (k = 0; k < 32; k++)
            le32enc(&B[4 * k], X[k]);

        PBKDF2_SHA256((const uint8_t *)input, 80, B, 128, 1, (uint8_t *)output, 32);
    }
PBKDF2_SHA256 is actually PBKDF2-HMAC-SHA-256. It takes an 80-byte input and produces an output hash of the desired length. It doesn't use much memory, something like a few hundred bytes. You can replace HMAC-SHA-256 with any hash function, or even a dozen chosen pseudo-randomly or chained together, and it won't be a problem for the ASIC designers anyway. Those mixing cycles with xor_salsa8() are the real complication. We need to replace Salsa with ChaCha or something else and probably focus on 64-bit code rather than the current 32-bit code. It may increase the performance of x86 CPUs, which are all 64-bit, and reduce the performance of popular GPUs, which mostly operate with 32-bit ALUs, but any future ASICs will also be affected.
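For reference, this is the standard ChaCha quarter-round (32-bit, as specified) that would take the place of the Salsa core inside those mixing cycles; the 64-bit widening suggested above would be a further, non-standard modification:

    #include <stdint.h>

    #define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))

    /* One ChaCha quarter-round over four 32-bit words. */
    void chacha_quarter_round(uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
    {
        *a += *b; *d ^= *a; *d = ROTL32(*d, 16);
        *c += *d; *b ^= *c; *b = ROTL32(*b, 12);
        *a += *b; *d ^= *a; *d = ROTL32(*d, 8);
        *c += *d; *b ^= *c; *b = ROTL32(*b, 7);
    }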
-
We need to replace Salsa with ChaCha or something else and probably focus on 64-bit code rather than the current 32-bit code. It may increase the performance of x86 CPUs, which are all 64-bit, and reduce the performance of popular GPUs, which mostly operate with 32-bit ALUs, but any future ASICs will also be affected.
I back this up. It would make all the GPUs a little slower (for everybody) and give a small advantage to CPU mining. This is (IMHO) a win-win situation: all the GPU miners will still fight on the same ground (same ratio, just different numbers), and meanwhile you can still mine even if all you have is a bunch of CPUs. I believe that this "no setup needed" factor is really important for spreading the coin to the non-geeks. If that's also a consequence of keeping ASICs away… from my point of view it's a no-brainer.
-
Can we just go back to basics?
The change should be simple: it should prevent mining with current Litecoin Scrypt ASICs.
Thwarting the current ASICs requires a change of hashing algorithm.
The SHA-256 hashing chip is the speedy part of the Litecoin ASIC. It combines with additional memory and programming to make it able to process Scrypt.
I think Ghostlander's idea of changing to or adding another SHA version is the way forward. We could test that as well, until the Blake alternative is available.
The best option is to look for a hashing algorithm that already has "miner support" in cgminer etc. This will greatly simplify the process for miners and coders. We should not reinvent the wheel unless there is good cause.
With cAlert and fair warning to the pools, it should inspire more confidence that we can make the necessary updates and get them out smoothly…
A lot of other coins have upgraded via hard forks as well, so I don’t think it is becoming such a big issue.
-
Keccak or SHA-3 can't stop ASICs. Give it up.
X11 can support CPU only; GPU can't be supported. Give it up.
Blake is better.
https://github.com/alphazero/Blake2b
Salsa20 is another good option, e.g. NFactor scrypt-chacha20/8(2*2^Nfactor, 1, 1); see the memory sketch below.
https://github.com/dubek/salsa20-ruby
So we need to use two kinds of algorithms yet! ::)
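To give a feel for what the N-factor form above does to memory, here is a small sketch assuming the usual scrypt scratchpad cost of about 128 × r × N bytes per hash instance; the N-factor range shown is illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint32_t r = 1, p = 1;

        for (uint32_t nfactor = 9; nfactor <= 14; nfactor++) {
            uint64_t N = 2ull << nfactor;        /* N = 2 * 2^Nfactor */
            uint64_t bytes = 128ull * r * N;     /* scratchpad per hash instance */
            printf("Nfactor %2u: N = %7llu, ~%5llu KiB per hash (r=%u, p=%u)\n",
                   (unsigned)nfactor, (unsigned long long)N,
                   (unsigned long long)(bytes / 1024), (unsigned)r, (unsigned)p);
        }
        return 0;
    }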
-
X11 can support CPU only; GPU can't be supported. Give it up.
This is a kind of axiom: if a GPU can do something, an ASIC can also do it. It’s a matter of time and money.
-
I agree with Ghostlander, the new ASICs are basically mini computers.
The top coin will dominate each new ASIC release for a while, then semi-programmable ASICs will evolve. It is also another way the coin network can resist centralisation of hashing power.
-
So I'm still interested in PoS solutions. They are fundamentally resistant to ASICs.