Phoenixcoin Hard Fork #4 Discussion
-
I know the situation with 10MH/s + 1GH/s hash rate is not good, but we have to deal with it. If the PXC price doesn’t change much, Multipool jumps in for one retarget cycle and mines all or almost all of the 126 blocks. The coins are dumped at Cryptsy later. Maybe 2% over 20 blocks can do better; it needs more research. The idea is to make PXC less attractive to coin hoppers without giving up too much security.
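For illustration, here is a minimal sketch of how such a limiter usually looks in Bitcoin-derived code, assuming a 150-second block spacing and the 20 block cycle mentioned above; the names and constants are mine, not the actual PXC source.
[code]
#include <cstdint>

static const int64_t nTargetSpacing  = 150;  // 2.5 minute blocks (assumed)
static const int64_t nRetargetBlocks = 20;
static const int64_t nTargetTimespan = nRetargetBlocks * nTargetSpacing;

// Bitcoin-style retargets scale the target by nActualTimespan/nTargetTimespan,
// so clamping the measured timespan to within ~2% of the ideal caps every
// difficulty move at about 2% per 20-block cycle.
int64_t LimitedTimespan(int64_t nActualTimespan) {
    if (nActualTimespan < nTargetTimespan * 100 / 102)
        nActualTimespan = nTargetTimespan * 100 / 102;  // difficulty rises <= ~2%
    if (nActualTimespan > nTargetTimespan * 102 / 100)
        nActualTimespan = nTargetTimespan * 102 / 100;  // difficulty falls <= 2%
    return nActualTimespan;
}
[/code]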
[quote]Depending on the ACP depth, I would add a bit more than 19, as even with 51% you are likely to get a time warp without orphaning, since PXC has a low hash rate under normal conditions. So the 30 minutes in the future can be used to get more than 10 blocks into the future and keep the chain under attacker control, as other blocks would not enter.[/quote]
It’s possible. I would prefer to run ACP at a depth of 3 under normal conditions. It can be reduced to 1 under a 51% attack. What do you suggest for the past and future limits?
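For reference, the future limit in question is just a bound on how far a block’s timestamp may run ahead of network-adjusted time. A minimal sketch, assuming the 30 minute window from the quote; the names are illustrative:
[code]
#include <cstdint>

static const int64_t nMaxClockDrift = 30 * 60;  // 30 minutes (value under discussion)

// Reject blocks whose timestamps run too far into the future; a tighter
// window shrinks the room for the time warp described in the quote.
bool IsTimestampAcceptable(int64_t nBlockTime, int64_t nAdjustedTime) {
    return nBlockTime <= nAdjustedTime + nMaxClockDrift;
}
[/code]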
-
I made a simulation but have no time to do more for now.
It’s a work in progress; the copy of sheet 1 with a graph on sheet 5 is the latest test. As far as I can see, 20-126-504 with 0.25 damping seems better, as 126-504 has a repeat every 126 blocks that seems exploitable if you throw some hash at it and then stop it at a well chosen point.
[url=https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdEJla3d2M1NCXy1XYXJCNUJPZVFzYVE&usp=sharing]https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdEJla3d2M1NCXy1XYXJCNUJPZVFzYVE&usp=sharing[/url]
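For anyone who wants to replay this kind of experiment outside a spreadsheet, here is a rough, self-contained C++ sketch of the same setup: a hopping pool joins below a difficulty threshold and a damped moving-average retarget reacts. All constants are illustrative and the limiter is omitted for brevity.
[code]
#include <cstdio>
#include <vector>

int main() {
    const double targetTime = 150.0;   // seconds per block
    const double baseHash   = 10.0;    // MH/s sustained
    const double poolHash   = 1000.0;  // MH/s from the hopping pool
    const double joinBelow  = 1.0;     // pool mines while difficulty < this
    const int    window     = 126;     // averaging window in blocks
    const double damping    = 0.25;
    // Calibrate so that at difficulty 0.5 and 10 MH/s a block takes targetTime
    const double k = targetTime * baseHash / 0.5;

    double diff = 0.5;
    std::vector<double> times;
    for (int block = 1; block <= 5040; block++) {
        double hash = baseHash + (diff < joinBelow ? poolHash : 0.0);
        times.push_back(k * diff / hash);  // expected block time at this diff
        if (block % window == 0) {
            double sum = 0.0;
            for (int i = 0; i < window; i++)
                sum += times[times.size() - 1 - i];
            double avg   = sum / window;
            double ideal = diff * targetTime / avg;  // undamped retarget
            diff += damping * (ideal - diff);        // damped step
            printf("block %5d  diff %8.4f  avg time %7.1f\n", block, diff, avg);
        }
    }
    return 0;
}
[/code]
-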
I have also made some simulations.
First of all, it doesn’t really matter which averaging to use with a 10MH/s sustained network hash rate and 1GH/s coming from Multipool at low difficulty. Every retarget will be boundary limited. Consider a 0.50 start difficulty, Multipool joins at…
I have gathered this week’s Feathercoin difficulty and hash rate data and put it into the simulation. I rounded 126 and 504 blocks to 100 and 500 respectively; it doesn’t seem to matter much. 500 block averaging doesn’t do very well. 100 block averaging is better, though damping makes no improvement. Combined 100 and 500 is very good even without damping. I tested with 0.25, 0.50 and 0.67 damping as well; there isn’t much difference between them, though 0.50 is a little better. 20/100 and 20/100/500 are not bad, but worse than 100 alone.

So far, retargets every 20 blocks with the 2% limiter and combined 100 and 500 block averaging, with or without 0.50 damping, is my choice. Google Docs doesn’t like my formulae and I’m not going to please it, so the spreadsheet is attached. LibreOffice is happy with it.
[attachment deleted by admin]
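For clarity, this is roughly what the combined estimate looks like as code. Each window yields its own ideal difficulty from its average block time; blending them with a simple mean is my assumption, and the 2% limiter discussed above would be applied on top.
[code]
// Sketch of the combined 100/500 estimate; parameter names are illustrative.
double CombinedIdeal(double diffOld, double avgTime100, double avgTime500) {
    const double targetTime = 150.0;  // seconds per block
    double ideal100 = diffOld * targetTime / avgTime100;
    double ideal500 = diffOld * targetTime / avgTime500;
    return (ideal100 + ideal500) / 2.0;
}
[/code]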
-
I have tried to find a good function for the hash rate but still don’t have a perfect one. The one you use converges by itself, so a constant difficulty would converge; we know this is not true.
A hash rate like =($Z27*0.0288)-(AQ27-$Z27)/5 seems to make a not too bad formula. Sorry, no time to finish my version of it tonight :(
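A literal translation of that formula into code, with my guess at the column meanings ($Z as the expected value, AQ as the current one); the point is a response that keeps moving rather than converging on its own under constant difficulty.
[code]
// Translation of =($Z27*0.0288)-(AQ27-$Z27)/5; column semantics assumed.
double ModelHashRate(double expected, double current) {
    return expected * 0.0288 - (current - expected) / 5.0;
}
[/code]
-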
[quote name=“groll” post=“31780” timestamp=“1382163406”]
I have tried to find a good function for the hash rate but still don’t have a perfect one. The one you use converges by itself, so a constant difficulty would converge; we know this is not true.
A hash rate like =($Z27*0.0288)-(AQ27-$Z27)/5 seems to make a not too bad formula. Sorry, no time to finish my version of it tonight :(
[/quote]I know my hash rate estimation formula isn’t very good and may be useful only for short periods of time. It needs more randomisation. Still, it shows well how various averaging models can settle within +/- 1% of the block time.
-
Here in the attachment you will find Ghostlander’s simulation with a more aggressive hash rate response to difficulty that doesn’t converge. Anything without damping oscillates. 100/500 or 126/504 seems to be the good choice with 0.25 or possibly even more aggressive damping.
[attachment deleted by admin]
-
Groll, there is a mistake in the formula for 100 block averaging which computes the limiter incorrectly:
=MIN(Z51*1,02;MAX(Z51*0,98;1/((V12+W12+X12+Y12+Z12)/(5*150))*Z11))
Z11 must be used instead of Z51. It oscillates much more strongly after the fix. Also, Z23 instead of Z63 for 20/100.
20 block averaging with 0.1 damping shows excellent results because the base data in the Z column is aligned perfectly. If we add 10% to the difficulty in Z55, it takes 10 retargets to stabilise within 1%. 0.25 damping is much faster.
I have added one 2x surge and one 0.5x dip 10 retargets after the surge to see how our models react to such conditions. 100 and 500 block averaging are both very bad. 100/500 is significantly better, but still not good enough. 100/500 with 0.50 or 0.67 damping oscillates too much. 100/500 with 0.25 is very good. I have added 100/500 with 0.33 to the simulation. It’s almost as good as 0.25 and much better than 0.50. I like it.
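The damped step itself is simple. Here is a sketch of what is being compared, with the 2% cap applied after the damping; 0.25, 0.33, 0.50 and 0.67 are the values tested in the spreadsheet, and the names are illustrative.
[code]
#include <algorithm>

// Move only a fraction of the way towards the ideal difficulty each
// retarget, then apply the 2% limiter.
double DampedStep(double diffOld, double diffIdeal, double damping) {
    double next = diffOld + damping * (diffIdeal - diffOld);
    return std::min(diffOld * 1.02, std::max(diffOld * 0.98, next));
}
[/code]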
Enuma, it’s too complicated. We need something simple to feed into Calc/Excel. Groll’s is good.
[attachment deleted by admin]
-
You are right, I didn’t really take care to verify the non-damped ones too much, as they are not working well, and yes, the 20 is broken by the formula. I was also confused because I calculate on time and you on hash rate, so changing the Z column makes some rows go to a different value than others, as they have different formulae representing the same reality through an expected value in Z. I played a lot with columns A-Y, especially U-Y, to see the effects before, and sent a working one with FTC values. But this altered future is good for seeing what happens too.
With a 2% retarget every 20 blocks and 100/500 sampling, more damping can also work well; 0.1 for example is fine. Since it retargets often, moving a bit more slowly works great even if large changes occur: if the error is larger than 20%, it goes to the maximum, and if it is lower, it just moves 1/10 of the error each time. The error is calculated on 5 and 25 samples, so a 20 block deviation is included several times in the calculation. That’s the reason we see some echo of the past in the curves.
I like the 20/100/500 with 0.1 damping, as the small immediate effect of a change can be seen, and the overall effect of the 20 window, if it is completely off, is minimised by both the other samplings and the damping. My second choice would be 100/500 with 0.1, but anything with 0.25 or more damping…
-
I have given it some further thought and testing with 0.1 damping. I have come to the conclusion that 100/500 with 0.1 is a better choice than 100/500 with 0.25, given 2% over 20 blocks. 20/100/500 with 0.1 shows very little improvement over 100/500 with 0.1, which isn’t worth the higher time warp vulnerability. Now I’m going to write the code.
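Putting the chosen parameters together, this is my reading of the whole rule as one function; a sketch of the parameters above, not the shipped v0.6.5.0 code.
[code]
#include <algorithm>
#include <cstddef>
#include <vector>

// Retarget every 20 blocks: average the last 100 and 500 block times,
// blend the two estimates, damp by 0.1, cap the move at 2%.
// blockTimes must hold at least the last 500 individual block times.
double NextDifficulty(double diffOld, const std::vector<double> &blockTimes) {
    const double targetTime = 150.0;  // seconds per block
    auto avg = [&](std::size_t n) {
        double sum = 0.0;
        for (std::size_t i = blockTimes.size() - n; i < blockTimes.size(); i++)
            sum += blockTimes[i];
        return sum / n;
    };
    double ideal100 = diffOld * targetTime / avg(100);
    double ideal500 = diffOld * targetTime / avg(500);
    double ideal    = (ideal100 + ideal500) / 2.0;        // combined 100/500
    double next     = diffOld + 0.1 * (ideal - diffOld);  // 0.1 damping
    return std::min(diffOld * 1.02, std::max(diffOld * 0.98, next));  // 2% cap
}
[/code]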
-
v0.6.5.0 is nearly complete. The code is at [url=https://github.com/ghostlander/Phoenixcoin]GitHub[/url] (note the new repository name). It has passed all tests on the testnet today. There are no binaries at this moment, but you can compile the code yourself as usual. You can use the Qt GUI of the previous release until I finish with it. You can also join the testnet with -addnode=146.185.140.182:19555 to test drive the new settings before the official release.
-
This is the summary of the 4th hard fork. Even though the PXC network peaked at almost 200MH/s today, our difficulty adjustment algorithm works very well. You can see it tracking the network hash rate smoothly in the graph below.
[attachment deleted by admin]
-
Excellent work, Ghostlander. You have done an upstanding job of turning Phoenixcoin around.
-
Very interesting work; I loved the chart. Can I resurrect my 2 Phoenixcoins, or is this a new network?
-
[quote name=“wrapper0feather” post=“37154” timestamp=“1385639667”]
Very interesting work; I loved the chart. Can I resurrect my 2 Phoenixcoins, or is this a new network?
[/quote]The coins are the same. Install the latest client, download the block chain and you’re good to go as usual.