Phoenixcoin Hard Fork \#4 Discussion
-
This is a clone of a [url=https://cryptocointalk.com/topic/1404-phoenixcoin-hard-fork-4-discussion/]thread[/url] started at CryptocoinTalk. I guess it’s more productive to keep the discussion here as well :)
Here is the initial specification draft for the upcoming Phoenixcoin hard fork. The primary objective is to settle on reasonable parameters for the long run in terms of performance and reliability. The current code base is a slightly tweaked fork of Litecoin v0.6.3 which needs major improvements in several areas. The most important ones are difficulty retargeting and block chain security.
Phoenixcoin has had 3 hard forks already. The 1st one was at block #46500 on the 19th of June, when PXC ran into its 1st serious difficulty trap. The averaging window and time to retarget were reduced from 2400 to 600 blocks, and the difficulty limiter was decreased from 4.0 to 1.8.
The 2nd hard fork (block #69444, the 2nd of August) was infamous for taking people by surprise, as it was released with only 6 hours' advance notice and played with things which were better left unchanged. First of all, the block target was halved from 1.5 minutes (90 seconds) to 45 seconds while the block reward remained at 50 PXC per block. Simply put, the coin generation rate was doubled! Such incompetence made many coin holders dump and leave. Most of the pools failed to update in time, which caused losses to their miners. In fact, even most of the loyal miners switched elsewhere soon after. In short, it was a disaster.
The 3rd hard fork (block #74100, the 30th of August) was supposed to fix the previous one. The block reward was halved to 25 PXC per block while the block target was kept at 45 seconds, making the coin generation rate 48K PXC per day again. The averaging window and time to retarget were reduced to 126 blocks, and the difficulty limiter was decreased to 1.09. That's what we have currently.
I think it's better to double both the block target to 1.5 minutes and the block reward to 50 PXC, like before the 2nd hard fork. There was no need to go below 1 minute for the block target: the orphan rate increased 5x, and the block chain gets inflated, which may become a problem in a few years. 1.5 minutes is fast enough, as it allows for 6 confirmations in less than 10 minutes.
The second important point is the coin supply. There are ~168 million PXC advertised, but the code has never been changed to accommodate this. The current code halves the reward every 1680K blocks, which results in ~84 million coins. Actually, it's ~86 million, as blocks before the 3rd hard fork came with the double sized reward of 50 PXC. I suggest scheduling the next hard fork at block #154000, which is nearly a month ahead, and halving the reward every 1000K blocks (~2.85 years). It results in a ~98 million PXC supply (100 million theoretical minus 2 million coins due to the 25 PXC block reward between #74100 and #154000).
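The supply arithmetic above can be reproduced with a short script. This is my own reconstruction, not the actual client code; it assumes halvings are counted from genesis and that the reward returns to 50 PXC at block #154000 as proposed.

```python
FORK3_BLOCK = 74_100     # reward cut to 25 PXC here (3rd hard fork)
FORK4_BLOCK = 154_000    # proposed fork: reward restored to 50 PXC
HALVING_INTERVAL = 1_000_000

def block_reward(height):
    """Reward in PXC at a given height under the proposed schedule."""
    if FORK3_BLOCK <= height < FORK4_BLOCK:
        return 25.0
    return 50.0 / (2 ** (height // HALVING_INTERVAL))

def total_supply(eras=40):
    """Approximate the eventual supply by summing whole halving eras,
    then subtracting the coins not minted during the 25 PXC era."""
    supply = sum(HALVING_INTERVAL * 50.0 / (2 ** era) for era in range(eras))
    supply -= (FORK4_BLOCK - FORK3_BLOCK) * 25.0
    return supply

print(round(total_supply() / 1e6, 2))  # ~98.0 million PXC
```

The theoretical 100 million comes from 50 PXC times 1,000,000 blocks times the geometric halving series, and the ~2 million deduction is the 79,900 blocks between the forks minted at 25 PXC instead of 50.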
The third thing to do is faster retargeting with better averaging. A 9% difficulty limiter over 126 blocks is a good choice in general, but 1% over 20 blocks (~30 minutes) seems better. As for averaging, my choice is a combined average of the past 126 and 504 blocks with 0.25 damping, as it seems to do really well.
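One adjustment under this scheme could look like the sketch below. This is my own illustration of the idea, not the shipped code; the function name and the exact way the two windows are combined are assumptions.

```python
def retarget(old_diff, avg_time_126, avg_time_504,
             target=90.0, damping=0.25, limiter=0.01):
    """One damped, limited difficulty adjustment: combine the average
    block times of the two windows, damp the correction by 0.25, and
    clamp the result to +/- 1% of the old difficulty."""
    avg_time = (avg_time_126 + avg_time_504) / 2.0
    desired = old_diff * (1.0 + damping * (target / avg_time - 1.0))
    return max(old_diff * (1.0 - limiter),
               min(old_diff * (1.0 + limiter), desired))
```

With blocks arriving twice too fast (45 s average against a 90 s target), the damped correction asks for +25%, but the limiter caps the step at +1%; that is the point of pairing a small limiter with frequent 20-block retargets.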
As for block chain protection against 51% attacks, including time warps, we need to reduce the future time limit from 2 hours to 30 minutes. nMedianTimespan has to be increased from 11 to 19 blocks accordingly. I see no way to integrate 0% PoS without breaking the PoW coin distribution schedule, and PXC needs real protection now more than ever (the whole network hash rate is ~10MH/s unless Multipool switches their 1GH/s on). We have to implement ACP (advanced checkpointing) as an optional component, though enabled by default, until a better solution comes up.
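The two timestamp rules above can be sketched like this. It is an illustration of the checks, not the actual client code, and the function and parameter names are mine.

```python
def timestamp_ok(block_time, prev_block_times, now,
                 future_limit=30 * 60, median_span=19):
    """Accept a block time only if it is no more than 30 minutes ahead
    of our clock and strictly later than the median of the last 19
    block times (the raised nMedianTimespan)."""
    if block_time > now + future_limit:
        return False
    window = sorted(prev_block_times[-median_span:])
    median_past = window[len(window) // 2]
    return block_time > median_past
```

Tightening the future limit shrinks the window an attacker can push timestamps into, while the longer median span makes it harder to drag the median-past time around with a short burst of self-mined blocks.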
-
All this seems OK; I have 2 comments.
[quote]the third thing to do is a faster retargetting with a better averaging. 9% difficulty limiter over 126 blocks is a good choice in general, but 1% over 20 blocks (~30 minutes) seems better. As for averaging, my choice is a combined average of the past 126 and 504 blocks with 0.25 damping as it seems to do really well.[/quote]
change using longer sampling usually gives some weird effect as pre-last change is reused and is likely to make he next go the opposite way as the new state. But the 1% can possibly make this safe, I don’t have time to simulate that. but from what i see from FTC and the 2-3Gh/s switch from a pool switcher(not multipool only 1G). the switch to one side to the other is instant and can even occur back and forth depending on price 2-3 times in a 126 blocks . You can’t get switcher out you need to make sure you play with them nicely without starving yourself because of history. i wouls try to put 10M- 1G on a 0.5 diff switching in a simulation and play with 1-2 change to see the effect. the damping will probably do nothing in this case as damping would need to be an order of magnitude more , but that would make the coin too much irresponsive.[quote]What comes to the block chain protection agaisnt 51% attacks including time warps, we need to reduce the future time limit from 2 hours to 30 minutes. nMedianTimespan has to be increased from 11 to 19 blocks accordingly. [/quote]
Depending on the ACP depth, I would go a bit higher than 19, as an attacker with 51% is likely to pull off a time warp without orphaning since PXC has a low hash rate under normal conditions.
So the 30 minutes into the future can be used to get more than 10 blocks into the future and keep them under attacker control, as other blocks would not enter. For the past, I suggest you also add a maximum past limit from any of the most recent valid blocks (not just the last one). Depending on the ACP depth, you may have to tweak it a bit so that any valid block on any chain counts; then orphaning blocks below the ACP range would not allow placing timestamps too far in the past. ACP at depth 1 produces no orphans, so it would work great with only the top block, but at depth 3 (like FTC) you can still 51% time warp the past (the attacker would need to keep a precalculated queue of blocks so they can orphan at will as soon as new blocks arrive). So the only way to enforce a maximum past time is to include orphaned chains. (It's now described, so now someone will try it, but FTC is probably safe for now as long as it stays above 3GH/s. That's not the case for PXC.)
-
I know the situation with 10MH/s + 1GH/s hash rate is not good, but we have to deal with it. If the PXC price doesn't change much, Multipool jumps in for 1 retarget cycle and mines all or almost all of the 126 blocks. The coins are dumped at Cryptsy later. Maybe 2% over 20 blocks can do better; it needs more research. The idea is to make PXC less attractive to coin hoppers without giving up too much security.
[quote]Depending on the ACP depth, I would go a bit higher than 19, as an attacker with 51% is likely to pull off a time warp without orphaning since PXC has a low hash rate under normal conditions. So the 30 minutes into the future can be used to get more than 10 blocks into the future and keep them under attacker control, as other blocks would not enter.[/quote]
It's possible. I would prefer to run ACP at a depth of 3 under normal conditions. It can be reduced to 1 if under a 51% attack. What do you suggest for the past and future limits?
-
I made a simulation but have no time to make more for now.
It's a work in progress; a copy of sheet 1 with a graph on sheet 5 is the latest test. As far as I can see, 20/126/504 with 0.25 damping seems better, as 126/504 has a repeat every 126 blocks that seems exploitable if you throw in and then stop some hash at a well-chosen point.
[url=https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdEJla3d2M1NCXy1XYXJCNUJPZVFzYVE&usp=sharing]https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdEJla3d2M1NCXy1XYXJCNUJPZVFzYVE&usp=sharing[/url]
-
I have also made some simulations.
First of all, it doesn't really matter which averaging to use with a 10MH/s sustained network hash rate and 1GH/s coming from Multipool at low difficulty. Every retarget will be boundary limited. Consider 0.50 start difficulty, Multipool joins at
I have gathered Feathercoin difficulty and hash rate related data for this week and put it into the simulation. I rounded 126 and 504 blocks to 100 and 500 respectively; it doesn't seem to matter much. 500 blocks averaging doesn't do very well. 100 blocks averaging is better, though damping makes no improvement. Combined 100 and 500 is very good even without damping. Tested with 0.25, 0.50 and 0.67 damping as well; not much difference between them, though 0.50 is a little bit better. 20/100 and 20/100/500 are not bad, but worse than 100 alone.
So far, retargets every 20 blocks with a 2% limiter and combined 100 and 500 blocks averaging, with or without 0.50 damping, is my choice. Google Docs doesn't like my formulae and I'm not going to please it, so the spreadsheet is attached. LibreOffice is happy with it.
[attachment deleted by admin]
-
I have tried to find a good function for hash rate, but still don't have a perfect one. The one you use converges by itself, so even a constant difficulty would converge, and we know this is not true.
A hash rate like =($Z27*0.0288)-(AQ27-$Z27)/5 seems to make a not too bad formula. Sorry, no time to finish my version of it tonight :(
-
[quote name=“groll” post=“31780” timestamp=“1382163406”]
I have tried to find a good function for hash rate, but still don't have a perfect one. The one you use converges by itself, so even a constant difficulty would converge, and we know this is not true.
A hash rate like =($Z27*0.0288)-(AQ27-$Z27)/5 seems to make a not too bad formula. Sorry, no time to finish my version of it tonight :(
[/quote]I know my hash rate estimation formula isn't very good and may be useful only for short periods of time. It needs more randomisation. Still, it shows well how various averaging models can settle within +/- 1% of the block time.
-
Attached you will find Ghostlander's simulation with a more aggressive hash rate change vs difficulty that doesn't converge. Anything without damping oscillates. 100/500 or 126/504 seems to be the good choice with 0.25 or possibly even more aggressive damping.
[attachment deleted by admin]
-
Groll, there is a mistake in the formula for 100 blocks averaging which calculates incorrect limiting:
=MIN(Z51*1,02;MAX(Z51*0,98;1/((V12+W12+X12+Y12+Z12)/(5*150))*Z11))
Z11 must be used instead of Z51. It oscillates much more strongly after the fix. Also Z23 instead of Z63 for 20/100.
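For reference, the corrected cell translates to something like this in Python. This is my transcription of the intent, not a cell-for-cell port; the 150 s target matches the Feathercoin data used in these sheets.

```python
def limited_retarget(prev_diff, window_times, target=150.0, limiter=0.02):
    """Equivalent of the fixed spreadsheet formula: scale the previous
    difficulty by target over the average of the sampled block times,
    then clamp the result to +/- 2% of the previous difficulty."""
    scaled = prev_diff * target * len(window_times) / sum(window_times)
    return min(prev_diff * (1 + limiter),
               max(prev_diff * (1 - limiter), scaled))
```

The original bug was clamping around the wrong row's difficulty, which silently widened the effective limiter; clamping and scaling must both reference the same previous difficulty.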
20 blocks averaging with 0.1 damping shows excellent results because the base data in the Z column is aligned perfectly. If we add 10% to the difficulty in Z55, it takes 10 retargets to stabilise within 1%. 0.25 damping is much faster.
I have added one 2x surge and one 0.5x dip 10 retargets after the surge to see how our models react to such conditions. 100 and 500 blocks averaging are very bad. 100/500 is significantly better, but still not good enough. 100/500 with 0.50 or 0.67 damping oscillates too much. 100/500 with 0.25 is very good. I have added 100/500 with 0.33 to the simulation. Almost as good as 0.25, much better than 0.50. I like it.
Enuma, it's too complicated. We need something simple to feed into Calc/Excel. Groll's is good.
[attachment deleted by admin]
-
You are right; I didn't really take care to verify the non-damped ones too much, as they are not working well, and yes, the 20 one is broken by the formula. I got confused because I calculate on time and you on hash rate, so moving the Z column makes some rows go to a different value than others, as they have different formulas representing the same reality with an expected value in Z. I played a lot with columns A-Y, especially U-Y, to see the effects beforehand, and sent a working one with FTC values. But this altered future is good for seeing what happens too.
With 2% retargets every 20 blocks and 100/500 sampling, more damping can also work well; 0.1 for example is fine, as it retargets often, so moving a bit more slowly would work great even if a large change occurs. If a change larger than 20% occurs, it goes to the maximum, and if lower, it just moves 1/10 of the error each time. The error is calculated on 5 and 25 samples, so a 20-block deviation would be included several times in the calculation; that's the reason we see some echo of the past in the curves.
I like the 20/100/500 with 0.1 damping, as the small immediate effect of a change can be seen, and the overall effect of the 20 sampling, if completely off, is minimized by both other samplings and the damping. My second choice would be 100/500 with 0.1, but anything with 0.25 or more damping (
-
I have given it some further thought and testing with 0.1 damping. I have come to the conclusion that 100/500 with 0.1 is a better choice than 100/500 with 0.25, given 2% over 20 blocks. 20/100/500 with 0.1 shows very little improvement over 100/500 with 0.1, which is not worth the higher time warp vulnerability. Now I'm going to write the code.
-
v0.6.5.0 is near complete. The code is at [url=https://github.com/ghostlander/Phoenixcoin]GitHub[/url] (note the new repository name). It has passed all tests on the testnet today. No binaries at this moment, but you can compile the code yourself as usual. You can use the Qt GUI of the previous release until I finish with it. You can also join the testnet with -addnode=146.185.140.182:19555 to test drive the new settings before the official release.
-
This is the summary of the 4th hard fork. Even though the PXC network peaked at almost 200MH/s today, our difficulty adjustment algorithm works very well. You can see it tracking the network hash rate smoothly in the graph below.
[attachment deleted by admin]
-
Excellent work, Ghostlander. You have done an upstanding job of turning Phoenixcoin around.
-
Very interesting work, I loved the chart. Can I resurrect my 2 Phoenixcoins, or is this a new network?
-
[quote name=“wrapper0feather” post=“37154” timestamp=“1385639667”]
Very interesting work, I loved the chart. Can I resurrect my 2 Phoenixcoins, or is this a new network?
[/quote]The coins are the same. Install the latest client, download the block chain, and you're good to go as usual.