Machine Learning and A.I. predictions for 2018
genetic algorithms reinforce themselves by breeding the best predictors to make new predictors. any time an algorithm reinforces a good idea and throws out a bad one, it's basically learning. i only mentioned it because of that AI thread. i thought you all might find it interesting.
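a toy sketch of that breed-the-best loop (the series, population size and mutation rate are all made up just to illustrate the idea):

```python
import random

random.seed(0)

# Toy series the predictors try to forecast: next value from the last 3.
series = [float(i % 10) for i in range(50)]
K = 3  # number of past readings each predictor looks at


def fitness(weights):
    """Negative mean squared error of one-step-ahead predictions."""
    err = 0.0
    for t in range(K, len(series)):
        pred = sum(w * x for w, x in zip(weights, series[t - K:t]))
        err += (pred - series[t]) ** 2
    return -err / (len(series) - K)


def breed(a, b):
    """Crossover: take each weight from either parent, then mutate slightly."""
    child = [random.choice(pair) for pair in zip(a, b)]
    return [w + random.gauss(0, 0.05) for w in child]


population = [[random.uniform(-1, 1) for _ in range(K)] for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    best = population[:10]                       # keep the best predictors
    population = best + [breed(random.choice(best), random.choice(best))
                         for _ in range(20)]     # breed replacements

print(-fitness(max(population, key=fitness)))    # final MSE of the best predictor
```

each generation throws out the worst two-thirds and breeds the survivors, which is exactly the "reinforce a good idea, throw out a bad one" loop.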
My first industrial supervisor task was an EngD project using Monte Carlo methods to automatically plan work, back in 1990; he's now a Professor.
My latest work is a robot song writer, which writes songs in Tab format. I've got about 6 months' work updating the "Lord of the Rings" number of pages of training data, or I could wait for improved frameworks to automatically format the data. It is interesting as this is done in ASCII, so you don't have all the trouble (show-stopper) of processing text to numbers and normalising any more. In fact the ASCII interface (char-rnn) could be used to predict serial lists of numbers.
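the ASCII point can be sketched: since Tab files are plain ASCII, each character already maps to an integer class via ord(), so a char-rnn-style pipeline needs no numeric normalisation. A minimal data-prep sketch with a 256-character buffer (assumed for illustration, not the actual pipeline described here):

```python
# Minimal sketch of char-rnn-style data preparation: the song Tabs are plain
# ASCII, so each character maps straight to an integer class with ord() --
# no numeric scaling or normalising layer is needed.

SEQ_LEN = 256  # the "character buffer" mentioned in the thread


def make_training_pairs(text, seq_len=SEQ_LEN):
    """Slice text into (input window, next character) training pairs."""
    pairs = []
    for i in range(len(text) - seq_len):
        window = [ord(c) for c in text[i:i + seq_len]]  # 256 input codes
        target = ord(text[i + seq_len])                 # class to predict
        pairs.append((window, target))
    return pairs


# Toy Tab-like training text: alternating lines of music notation.
tab_text = "e|--0--2--3--|\nB|--1--1--0--|\n" * 20
pairs = make_training_pairs(tab_text)
print(len(pairs), len(pairs[0][0]))
```

the same trick works for "serial lists of numbers": print the numbers as ASCII text and the model predicts them character by character.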
So I'm really interested; it's similar to the approach I would have gone for. In order to make more human decisions, a second layer could test and include cross references to "equations", other serial streams or external information.
For instance, the system might find that just the (numbers derived from the) daily ASCII text of financial headlines from a newspaper would help it make a better prediction.
Or, for an equation: it is known that under normal circumstances a known variable signal can be predicted by taking an average of a certain number of previous readings, the number depending on the signal. Whilst the system can learn to do that itself, it is a form of training to give the system information you think it might need.
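that averaging predictor is just a moving average; a minimal sketch (the signal values here are made up):

```python
def moving_average_predict(readings, window):
    """Predict the next value of a signal as the mean of the last `window` readings."""
    if len(readings) < window:
        raise ValueError("not enough readings for this window size")
    recent = readings[-window:]
    return sum(recent) / window


# A slowly varying signal: the average of recent readings tracks it well.
signal = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3]
print(moving_average_predict(signal, 4))  # mean of the last 4 readings, ~10.075
```

the window size is the "depending on the signal" part: a noisier signal wants a longer window, a fast-moving one a shorter window.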
so you are using the lord of the rings text to train the robot song writer? or the actual music from lord of the rings? either way it sounds neat
That is very impressive. Especially on analog datasets, it's pretty hard to overcome quantization issues.
@wrapper ahh i see. that makes a lot more sense :)
of course now i want to see if i can make my ai write music… but of course I don't have a way to say "this is good or this is bad". hmmmmm
The updates are to classifying the parts of songs (intro, verse, chorus) in a clear, consistent way the neural net can learn more easily (less noise). Also writing out all the verses, instead of repeats.
I’m about 25% through the latest update, but adding more Tabs would also improve the output.
I'm using a 1400-neuron net with a 256-character buffer, i.e. 8 layers of the same size. The system has learned the Tab structure: alternating lines of music and text. It starts songs with a title and composer. It nearly did a couple of rhymes …
you sound like you are well beyond this part but you might see if there are any datasets of guitar tabs or music scores at https://www.kaggle.com/datasets?sortBy=hotness&group=public&page=1&pageSize=20&size=all&filetype=all&license=all for you to digest. there are probably far better repositories of information you can use (and you probably have already found them) but i figured i should at least mention it
Yes, I joined that (kaggle) when you mentioned it; I've had a look round.
(Music) Not the sort of thing that can be common, except, say, MIDIs of classical music, or ABC format, which is text for one-line tunes like folk dance music. Both would need a data-alignment layer to reduce the de-noising work.
i'm curious how you decided the number of layers to use? i actually put layer architecture into the genetic algorithm i wrote. (for various reasons, partly so it could mimic that structure but also so I could import such networks as a starting spot) Because neural nets work fundamentally differently than genetic algorithms (reinforcing connections vs reinforcing whole models), more than 2 layers never gets me very far.
regardless i was curious of how one decides 8 is enough as it were
I did a year of experiments trying to increase the number of layers. The eight layers with 1400 neurons was the maximum I could get, with the extended input layer of 256. Increasing the buffer helps, as neural nets have trouble with memory beyond the buffer.
I would also have liked to restrict some layers, as this helps to extract higher-level relationships, but that would have meant learning Lua and customizing the char-rnn code.
I haven't done any runs for a while, as I've been working on 0.9.6.x. The last time, there had been some improvements to the code and I was able to increase the neurons from a max of 400 to 1400.
Here's some extended layer experiments I did with evolvehtml: https://github.com/wrapperband/evolvehtml
Like now, if i watch a video about Apollo, apparently I have to be convinced the world is flat.
The A.I. running YouTube is trying to convince me that all the horrors of totalitarian machine learning systems being used to spy on everyone are going to be blamed on and associated with "Blockchain" and a "Singularity". A false flag against Blockchain?
i for one support our new a.i. overlords… hehehe
It’s already been the big thing for a couple years.
I can’t tell you specifics of what we’re doing with it, but we’ve had an 8x volta 80Gbit nvlink system on order for a couple months now. :drooling_face:
Imagine what 40,000 cuda cores, 5,000 tensor cores, and HBM crazy-fast memory all ganged up in one box can do these days. Distributed systems have been designed to allow several multi-gpu boxes to be connected. That’s the reason ML/DL has taken off so quickly; this kind of raw power was practically unthinkable a mere half dozen years ago, and is making models only dreamed of before workable today. But then we all knew where gpgpu was headed anyway…
I had picked up a book on neural networks twenty years ago, and wondered then just what good that would ever serve in our lifetime. Who knew.
(btw, quantum computing is at the same stage now that nn’s were back then… :))
anything related to this or predictions for 2019?
Yep, it will be very interesting to see the predictions for 2019.
I know our forums are mostly dead; however, I have been working in this field for a year and a half. I am a co-founder and the principal developer for https://livemarketanalytics.com/
I was tired of sitting in front of a trading console and watching markets day in and day out, so I created an AI prediction engine and a trade bot to execute trades based on what the prediction engine was saying. The trade bot has run for more than a year and more than doubled my starting BTC. We are planning on launching a FREE public trade bot on GitHub before 2021.
@RIPPEDDRAGON oh i dunno, a few of us still check regularly.
fun fact, i did the exact same thing. the exception is i don't show people my code on github :) . i use the api at collective2 to do the trades.
It's been more of a hobby lately than a job, as i've needed my cash to support video game development (i need to eat ;) ), and to that end i don't actively change the code much these days. I will say, I abandoned the genetic algorithm as a way to improve picking, mainly because it likes to overfit in a non-dynamic system. (a lesson hard learned)
I’ve started many funds over the years and abandoned them as they turned out to be an overfit to history or just not diverse enough or i wanted a clean slate. etc.
These days I use a souped-up version of gradient boosting that takes in a news feed and fundamentals (so not quite a "quant", since the newsfeed is subjective, though it is scored by an outside firm)
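a stripped-down sketch of that idea, gradient boosting over fundamentals plus a pre-scored news-sentiment column (every feature name and number here is invented, and this is a from-scratch stump booster for illustration, not the production version):

```python
# Toy gradient boosting: each round fits a depth-1 regression stump to the
# residuals of the ensemble so far, then adds it with a small learning rate.

def fit_stump(X, residuals):
    """Find the (feature, threshold) split that best fits the residuals."""
    best = None
    for f in range(len(X[0])):
        for threshold in sorted({row[f] for row in X}):
            left = [r for row, r in zip(X, residuals) if row[f] <= threshold]
            right = [r for row, r in zip(X, residuals) if row[f] > threshold]
            if not left or not right:
                continue
            lmean, rmean = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((r - lmean) ** 2 for r in left)
                   + sum((r - rmean) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, f, threshold, lmean, rmean)
    _, f, t, lmean, rmean = best
    return lambda row: lmean if row[f] <= t else rmean


def predict(stumps, base, lr, row):
    """Ensemble prediction: base rate plus the scaled stump corrections."""
    return base + lr * sum(s(row) for s in stumps)


def boost(X, y, rounds=20, lr=0.3):
    """Gradient boosting for squared loss: repeatedly fit stumps to residuals."""
    base = sum(y) / len(y)
    stumps = []
    for _ in range(rounds):
        preds = [predict(stumps, base, lr, row) for row in X]
        residuals = [yi - p for yi, p in zip(y, preds)]
        stumps.append(fit_stump(X, residuals))
    return base, stumps


# Columns: [P/E ratio, revenue growth, news sentiment score] -- all made up.
X = [[10, 0.05, 0.8], [30, 0.02, -0.5], [15, 0.10, 0.3],
     [40, 0.01, -0.9], [12, 0.07, 0.6], [35, 0.03, -0.2]]
y = [0.04, -0.02, 0.03, -0.05, 0.05, -0.01]  # next-period return (made up)

base, stumps = boost(X, y)
print(predict(stumps, base, 0.3, [11, 0.06, 0.7]))  # a value-ish, good-news stock
```

the subjective part is hidden in that third column: the sentiment score comes from outside the model, which is why it stops being a pure quant system.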
I actually recently found that quandl changed the api a few MONTHS ago on my quarterly data, so the program had been stuck using old data (and had stopped buying some stocks altogether because of it) … alas, it's easy to miss these things. no real loss, just more time with a bad sample of data.
the results over the years have been mixed (for reasons like the ones explained). I'm always messing with it, and markets this year have made less sense than ever, so. eh. oddly this month has been a really good one; it made up for most of the losses earlier this year. I'm actually improving the code right now (as in, running a test in visual studio right now), so it's amusing you posted this (as it's been sitting on the side for about 10 months).
i don't know how much R&D you do, but lately i've been thinking real hard about how one might do a reverse t-sne. essentially, start with an end pattern and figure out the t-scores needed to get the data where you want it. if you know that, then you find the weighting/spatial adjustments that most closely match it. amusingly, there might be a way to build a t-sne out of that fitting to solve it in a 2nd run… but now i'm getting ahead of myself. anyway, hopefully you aren't just botting this out to the forums you are on, and you come back and read this. if not :) well, it was fun to type all the same. good luck on your site.
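for reference, the quantity a "reverse t-sne" would have to start from is the low-dimensional Student-t similarity q_ij that t-SNE optimises; a small sketch of computing it (the embedded points here are made up):

```python
import math  # kept for clarity; only arithmetic is actually needed


def tsne_similarities(Y):
    """Pairwise t-SNE low-dimensional similarities q_ij over embedded points.

    Each pair gets a Student-t (1 degree of freedom) kernel 1/(1+d^2),
    normalised so all q_ij sum to 1 -- the standard t-SNE definition.
    """
    n = len(Y)
    kernel = {}
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(Y[i], Y[j]))
                kernel[(i, j)] = 1.0 / (1.0 + d2)
    total = sum(kernel.values())
    return {ij: k / total for ij, k in kernel.items()}


Y = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]  # two nearby points and an outlier
q = tsne_similarities(Y)
# The nearby pair (0, 1) gets far more similarity mass than the outlier pairs.
print(q[(0, 1)] > q[(0, 2)])  # → True
```

going backwards would mean choosing a target layout Y, reading off these q values, and then searching for the input weighting that reproduces them, which is the hard fitting step described above.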
oh, and here's the site i put together last december (which probably needs to be updated to reflect the new logic/reasoning i use): https://securitiesminer.com . while not a bot, it allows you to see the results from my daily analysis.
also, the fundamental difference between what i did and what you did is i don't try to do anything in real time; everything is done daily. (nor do i mess with crypto, as there isn't much fundamental data, which is my main focus)
An interesting Discussion.
@j_scheibel I have checked your site and its predictions come close to the real results, but for highly volatile shares it has problems, which is to be expected.
I'd say it is a good tool, but it should be used with care for real trading decisions.
yeah i think you are spot on. the real problem with any analysis of the stock market is there are always things going on that the raw data doesn’t tell you. A rock solid buy can be completely undermined overnight by some unknown coming to light. or maybe just someone badmouthing it on a popular media outlet. (the opposite is true as well - bad buy being made “a winner” for a time anyway).
As much as anything, this gives me an outlet for putting my latest data mining work to good use. I mentioned t-sne; if i can solve the last piece of the puzzle with that (the first half seems to work well), i'll really have something. outliers (as they always are) have been a problem. neural networks by their nature will, if possible, build a scenario for those that decision trees generally don't. if i can find the right way to handle that, i might have something even more worthwhile. :)