@anema said in GTX 1070ti:
Btw What about molex to 6 pin is it okay to use it ? connect it directly to the PSU
“Molex” is a company name, not a specific pin configuration. The single-row 4-pin connector used on old-school IDE/SCSI hard drives is a Molex design, as is the 2-row, 3-wide 6-pin configuration used for 75W GPU power connectors. Either one will have heavier-gauge wires, usually 18AWG or lower, as opposed to the 26AWG or worse used in some SATA plugs.
Given the same amperage, the heat generated in a wire drops as the wire gets thicker, because a thicker wire has lower resistance. Physics 101. The lower the gauge number, the thicker the wire. The 16 or 18 gauge wires used in 4- or 6-pin Molex connectors will generate much less heat than the 26 gauge wires used in some SATA connectors, which have much lower current ratings.
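The gauge argument above can be put into numbers with P = I²R. As a rough sketch (the resistance-per-metre figures are approximate standard values for copper at room temperature, not measurements of any particular cable):

```python
# Heat dissipated per metre of copper wire at the same current, P = I^2 * R.
# Resistance-per-metre values are approximate standard figures for copper.
R_PER_M = {18: 0.021, 26: 0.134}  # ohms per metre (approx.)

def heat_per_metre(current_a, gauge):
    """Watts dissipated per metre of wire at the given current."""
    return current_a ** 2 * R_PER_M[gauge]

current = 6.0  # amps, roughly a 75W load on a 12V rail
for gauge in (18, 26):
    print(f"{gauge}AWG: {heat_per_metre(current, gauge):.2f} W/m")
```

At the same 6A, the 26AWG wire dissipates several times the heat of the 18AWG wire, which is the whole point of using heavier-gauge cabling.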
IF YOU KNOW WHAT YOU’RE DOING and have the proper measurement tools, you can check the current draw from your cards when fully utilized AND overclocked to the max. If it’s below the current rating of your connector of choice, you’re OK to use it. Otherwise, err on the side of common sense: spend the few relative bucks and use the proper cabling, especially when you’re spending many hundreds of dollars per GPU.
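The check described above boils down to one division: current on the 12V rail is power divided by voltage, compared against the connector’s rating. The rating below is a placeholder; look up your actual connector’s datasheet.

```python
# Compare worst-case current draw against a connector rating.
# The rating here is a placeholder -- use your connector's datasheet.
def draw_amps(watts, volts=12.0):
    """Current on the 12V rail for a given power draw."""
    return watts / volts

gpu_watts = 75.0            # max power pulled through this connector
connector_rating_a = 8.0    # placeholder rating, not a datasheet value

current = draw_amps(gpu_watts)
print(f"draw: {current:.2f} A, rating: {connector_rating_a} A")
print("OK to use" if current <= connector_rating_a else "use proper PCIe power")
```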
Thanks for the welcome message and the feedback! I’ve been reading more and experimenting more.
I’m 50% of the way to my goal of ~40MH/s. It’s proven hard to order so many graphics cards; I have to get a volume account set up for my business, so I have been ordering them in small batches.
I currently have three Windows 10 PCs running. It took some effort to get it all running and stable, but it appears to be running well now.
My current configuration is:
Windows 10 64-bit on an Asus 19-card Mining Expert board,
running 11 cards (5x 1070, 2x 1060, 4x 1080), hashing right about 10MH/s.
Windows 10 64-bit on an Asus H270 Plus board,
running 8 cards (3x 1060, 1x 1070, 4x 1080), hashing about 6.7MH/s.
Windows 10 64-bit on an old HP board,
running 3 cards (3x 1070) with some experimental modifications … hashing at 3.3MH/s.
Currently this puts me right around 20MH/s, which is starting to make a good number of coins per day. I am experimenting with some profit-switching algorithms and direct mining, as well as some custom mining software/scripts that I am developing.
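The core of a profit-switching script is simple: estimate net profit per day for each candidate coin from your hashrate, the coin’s yield and price, and your power cost, then mine the winner. A minimal sketch, with entirely hypothetical coin names and numbers:

```python
# Minimal profit-switching sketch. All coin stats are hypothetical.
def net_profit_per_day(hashrate, coins_per_hash_day, price_usd,
                       rig_watts, usd_per_kwh):
    """Estimated USD/day: mining revenue minus electricity cost."""
    revenue = hashrate * coins_per_hash_day * price_usd
    power_cost = rig_watts / 1000.0 * 24.0 * usd_per_kwh
    return revenue - power_cost

coins = {  # hypothetical per-coin stats for the same rig
    "coin_a": {"coins_per_hash_day": 2.0e-7, "price_usd": 0.40},
    "coin_b": {"coins_per_hash_day": 1.5e-7, "price_usd": 0.60},
}

hashrate = 10e6       # 10 MH/s
rig_watts = 1200.0
usd_per_kwh = 0.12

best = max(coins, key=lambda c: net_profit_per_day(
    hashrate, coins[c]["coins_per_hash_day"], coins[c]["price_usd"],
    rig_watts, usd_per_kwh))
print("mine:", best)
```

A real switcher would pull yield and price estimates from a pool or exchange API and add hysteresis so it doesn’t flip coins on every small price move.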
I’m hoping I get “most” of my ROI back in about 4 months … we will see!
I haven’t used ccminer in a while now, but check the output of
‘ccminer --help’
I think you can specify the ID number(s) of the GPU(s) to use.
If you assign GPU 0 to one coin and GPU 1 to another, it should be possible.
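For instance, ccminer’s -d/--devices option selects which GPU(s) an instance uses, so two instances can mine two coins. The pool URLs and wallet names below are placeholders:

```shell
# One ccminer instance per GPU, each pointed at a different pool.
# Pool addresses and WALLET_A/WALLET_B are placeholders.
ccminer -a neoscrypt -d 0 -o stratum+tcp://pool-for-coin-a:3333 -u WALLET_A &
ccminer -a neoscrypt -d 1 -o stratum+tcp://pool-for-coin-b:3333 -u WALLET_B &
```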
Thanks for coming back to me on that. There should be a value for clock/mem/watts in Afterburner, right? Maybe you could post those here? I am just curious about the settings of the same chip from a different brand…
This topic is 4 years old but deserves a bump! I recently had the time to bring back one of my old mining rigs which used 280X cards…
Yes, this still works, and yes, I was able to undervolt to 0.950V. My rig of 4x MSI 280X 3GB cards is doing 2 MH/s total; each card is ~505 KH/s.
using nsgMiner 0.9.3
Thank you @ghostlander
I own a GTX 1080 and I am mining some Feathercoins with it right now, but only because I bought it for gaming, not for mining. I wanted to invest in Feathercoin too, and figured it wouldn’t be worth investing in a mining rig because I don’t want to spend too much money right now. So I just bought the Feathercoins directly and have already doubled my investment.
And considering the difficulty is rising quickly right now, I think it was the better decision to buy the Feathercoins directly. Also, electricity is pretty expensive where I live.
CPU - Intel Pentium G6420 @ 3.7GHz (according to Device Manager). I don’t know if this is relevant, but it lists it 4 times.
I did all of that, to the best of my ability; I may have left something enabled inadvertently. Having done all of that, I can now boot the computer fine. It hiccups a little on startup, but the 4th GPU is recognized. Everything is perfectly fine until I start my miner. At that point the PC freezes and requires a manual reset. When it comes back up, the 4th GPU is no longer shown. The only way I can get it to show back up is to turn the machine off, change PCIe slots, then reboot… and the same situation repeats on the next reboot.
I’m currently running just the 3 cards for a little while, because the downtime was maddening…