Frequently Asked Questions (FAQ): FPGA Guide

Can anyone explain the bitcoin source code?

I have a friend who wants to try mining bitcoin, and I want to help him. I understand the operating principle of bitcoin, but why do both the bitcoin client and separate mining programs (such as cgminer) exist? Could anyone provide a guide to reading the source code so that I can understand it on a deeper level?
P.S. I tried to look into the makefile, but I am not an experienced programmer, so I could not understand much. Is that the right direction?
submitted by ChunkOfAir to BitcoinBeginners

Vast Monero network hash rate increase

What is up with this recent increase of the hash rate? It has almost doubled in a matter of days. Has any particular reason for this been confirmed yet?
submitted by fakoshi to Monero

Monero, the Most Private Cryptocurrency

Written by the CoinEx Institution, this series of light-hearted, easy-to-understand articles will show you everything you need to know about major cryptocurrencies, making you fully prepared before jumping into crypto!

Monero, or XMR for short, is an open-source cryptocurrency that is safe, reliable, private, and untraceable. It runs on Windows, Mac, Linux, and FreeBSD, and is known as one of the most private cryptocurrencies. By 2018, Monero already ranked 10th among cryptocurrencies by trading volume, with a market value beyond 1 billion US dollars, clear evidence of its prominence in this field.
Using special cryptographic techniques, Monero ensures that all transactions remain unlinkable and untraceable. Perhaps after reading this article, you will understand why it is so special and popular in the increasingly transparent and traceable cryptocurrency world (after all, privacy comes first!).
In fact, many large cryptocurrencies in the world are not anonymous. All transactions on Bitcoin and Ethereum are public and traceable, which means that anyone can eavesdrop on transactions flowing into and out of a wallet. That has given rise to a new type of cryptocurrency called the "privacy coin"! These privacy coins conceal transaction details using specialized cryptography. One typical example is Monero, one of the largest privacy cryptocurrencies in the world.
Monero was created on April 18, 2014 under the name BitMonero, a combination of "Bit" (from Bitcoin) and "Monero" (Esperanto for "coin"). Within five days, the community decided to shorten the name to Monero.
Interestingly, Monero's creators valued personal privacy and kept a low profile, using pseudonyms instead of real names. The project's major early contributor went by the nickname "thankful_for_today", yet this person gradually disappeared from public view as Monero developed.
Unlike many cryptocurrencies derived from Bitcoin, Monero is based on the CryptoNote protocol; it was the first fork of Bytecoin, the original CryptoNote currency. Here is some background on Bytecoin: BCN, for short, is a decentralized cryptocurrency with a high degree of privacy. Its code is open source, allowing everyone to contribute to the development of the Bytecoin network, which provides global users with instant, untraceable private transactions at no additional cost.
Yet, as a fork of BCN, Monero outshines its parent in reputation by differing in two ways. First, Monero's target block time was reduced from 120 seconds to 60 seconds; second, the emission speed was cut by 50% (this later reverted to a 120-second block time, with the emission schedule maintained by doubling the reward for each new block). During the fork, the Monero developers also found a lot of low-quality code and refactored it. (That is exactly what geeks do.)
Monero’s modular code structure was also highly appreciated by Wladimir J. van der Laan, one of the core maintainers of Bitcoin.
Monero values privacy, decentralization and scalability, and there are significant algorithm differences in blockchain fuzzification, which sets it apart from its peers. How private is it? Here are more details.
1. Safe and reliable
For a decentralized cryptocurrency, decentralization means that its network is operated by users; transactions are confirmed by decentralized consensus and then recorded on the blockchain irrevocably. Monero needs no third party to guarantee the safety of funds;
2. Privacy protection
Monero confuses all transaction sources, amounts, and recipients through ring signatures, ring confidential transactions, and invisible addresses. Apart from all the advantages of a decentralized cryptocurrency, it is by no means inferior in safeguarding privacy;
3. Untraceable
The sender, the receiver, and the transaction amount of every Monero transaction are anonymous by default. Information on the Monero blockchain cannot be matched to physical individuals or specific users, so there is nothing to trace;
4. Scalable
Everyone knows that Bitcoin's ability to process transactions has always been limited by the scalability issue; as mentioned in our introduction to Bitcoin, the 1 MB block size makes things difficult. Monero's developers have instead created a system that allows the network to process more transactions when needed; what's more, Monero has no preset restriction on block size.
Of course, this also means that malicious miners could clog the system with large blocks. To prevent this from happening, Monero has a countermeasure: a block reward penalty.
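The penalty mechanism can be sketched as follows. This is a simplified model of the CryptoNote-style reward penalty (the exact constants and median window are implementation details of Monero, assumed here for illustration): blocks at or below the median size earn the full reward, blocks above it lose a quadratically growing share, and blocks more than twice the median earn nothing.

```python
def block_reward(base_reward: float, block_size: int, median_size: int) -> float:
    """Simplified CryptoNote-style reward penalty: full reward up to the
    median block size, quadratic penalty above it, zero beyond 2x median."""
    if block_size <= median_size:
        return base_reward
    if block_size > 2 * median_size:
        return 0.0  # oversized blocks are rejected outright
    excess = (block_size - median_size) / median_size
    return base_reward * (1.0 - excess ** 2)
```

The quadratic shape means a slightly oversized block costs a miner little, while a deliberately bloated one costs most of the reward, which is what discourages spam without hard-capping block size.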
On October 18, 2018, Monero's latest hard fork changed the consensus algorithm to CryptoNight V8. The same hard fork introduced the Bulletproofs protocol, which substantially reduces transaction fees without disclosing transaction amounts.
Monero will issue about 18.4 million XMR over roughly 8 years. Moreover, it eclipses its counterparts in distribution: with no pre-mining or pre-sale, all block rewards go to miners through the PoW mechanism.
Here is the reward scheme of Monero in two stages:
  1. Acceleration: mine 18,132,000 XMR before May 2022;
  2. Deceleration (tail emission): once 18,132,000 XMR have been mined, each block thereafter carries a fixed reward of 0.6 XMR, keeping ongoing issuance small and steadily shrinking as a percentage of total supply.
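The arithmetic of the tail emission is easy to check. The sketch below assumes the 120-second block time mentioned earlier and the fixed 0.6 XMR tail reward; the 18,132,000 XMR figure is the main-emission total from the schedule above.

```python
# Assumed constants from the article: 120 s blocks, 0.6 XMR tail reward,
# ~18.132M XMR main emission.
BLOCK_TIME_S = 120
TAIL_REWARD_XMR = 0.6
MAIN_EMISSION_XMR = 18_132_000

blocks_per_year = 365 * 24 * 3600 // BLOCK_TIME_S      # 262,800 blocks
tail_xmr_per_year = blocks_per_year * TAIL_REWARD_XMR  # ~157,680 XMR
annual_inflation = tail_xmr_per_year / MAIN_EMISSION_XMR
# under 1% per year at the start of the tail, and falling as supply grows
```

So the tail emission adds well under 1% of supply per year, a rate that only declines over time.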
Monero is also notable for a development philosophy that has been anti-ASIC from the very beginning. Here is a brief introduction to the ASIC (Application-Specific Integrated Circuit).
Because ASICs are purpose-built, they usually achieve much higher hashrates than general-purpose CPUs, GPUs, and even FPGAs; that concentrates hashrate excessively and makes the network vulnerable to monopoly by a few centralized operators. The CryptoNight algorithm used by Monero instead lets ordinary CPUs and GPUs participate in mining and earn rewards, rather than leaving specialized hardware as the only efficient option.
In other words, Monero’s core development team will modify the consensus mechanism algorithm and have a hard fork after some time to ensure its strength against ASIC and the monopoly of hashrate.
However, although Monero has been designed against ASICs to avoid centralization, nearly 43% of its hashrate is still controlled by 3 mining pools. In addition, because it is not a Bitcoin-derived codebase, it is harder to integrate tooling built for Bitcoin-like chains. Monero is also not especially newbie-friendly, which has limited its wider adoption.
Yet each cryptocurrency has its own features. As long as Monero keeps improving its privacy, it will definitely attract increasing followers. If you are interested in Monero, welcome to CoinEx for exchange or trade.

About CoinEx

As a global, professional cryptocurrency exchange service provider, CoinEx was founded in December 2017 with Bitmain-led investment and has obtained a legal license in Estonia. It is a subsidiary brand of the ViaBTC Group, which operates the world's fifth-largest BTC mining pool, also the largest BCH mining pool.
CoinEx supports perpetual contracts, spot, margin trading and other derivatives, and its service reaches users in nearly 100 countries/regions with various languages available, such as Chinese, English, Korean and Russian.
Website: https://www.coinex.com/
Twitter: https://twitter.com/coinexcom
Telegram: https://t.me/CoinExOfficialENG
Click here to register on CoinEx!
submitted by CoinEx_Institution to Coinex

How are FPGAs used in trading?

A field-programmable gate array (FPGA) is a chip that can be programmed to suit whatever purpose you want, as often as you want it and wherever you need it. FPGAs provide multiple advantages, including low latency, high throughput and energy efficiency.
To fully understand what FPGAs offer, imagine a performance spectrum. At one end, you have the central processing unit (CPU), which offers a generic set of instructions that can be combined to carry out an array of different tasks. This makes a CPU extremely flexible, and its behaviour can be defined through software. However, CPUs are also slow because they have to select from the available generic instructions to complete each task. In a sense, they’re a “jack of all trades, but a master of none”.
At the other end of the spectrum sit application-specific integrated circuits (ASICs). These are potentially much faster because they have been built with a single task in mind, making them a “master of one trade”. This is the kind of chip people use to mine bitcoin, for example. The downside of ASICs is that they can’t be changed, and they cost time and money to develop. FPGAs offer a perfect middle ground: they can be significantly faster than a CPU and are more flexible than ASICs.
FPGAs contain thousands, sometimes even millions, of so-called core logic blocks (CLBs). These blocks can be configured and combined to process any task that can be solved by a CPU. Compared with a CPU, FPGAs aren’t burdened by surplus hardware that would otherwise slow you down. They can therefore be used to carry out specific tasks quickly and effectively, and can even process several tasks simultaneously. These characteristics make them popular across a wide range of sectors, from aerospace to medical engineering and security systems, and of course finance.
How are FPGAs used in the financial services sector?
Speed and versatility are particularly important when buying or selling stocks and other securities. In the era of electronic trading, decisions are made in the blink of an eye. As prices change and orders come and go, companies are fed new information from exchanges and other sources via high-speed networks. This information arrives at high speeds, with time measured in nanoseconds. The sheer volume and speed of data demands a high bandwidth to process it all. Specialized trading algorithms make use of the new information in order to make trades. FPGAs provide the perfect platform to develop these applications, as they allow you to bypass non-essential software as well as generic-purpose hardware.
How do market makers use FPGAs to provide liquidity?
As a market maker, IMC provides liquidity to buyers and sellers of financial instruments. This requires us to price every instrument we trade and to react to the market accordingly. Valuation is a view on what the price of an asset should be, which is handled by our traders and our automated pricing algorithms. When a counterpart wants to buy or sell an asset on a trading venue, our role is to always be there and offer, or bid, a fair price for the asset. FPGAs enable us to perform this key function in the most efficient way possible.
At IMC, we keep a close eye on emerging technologies that can potentially improve our business. We began working with FPGAs more than a decade ago and are constantly exploring ways to develop this evolving technology. We work in a competitive industry, so our engineers have to be on their toes to make sure we’re continuously improving.
What does an FPGA engineer do?
Being an FPGA engineer is all about learning and identifying new solutions to challenges as they arise. A software developer can write code in a software language and know within seconds whether it works, and so deploy it quickly. However, the code will have to go through several abstraction layers and generic hardware components. Although you can deploy the code quickly, you do not get the fastest possible outcome.
As an FPGA engineer, it may take two to three hours of compilation time before you know whether your adjustment will result in the outcome you want. However, you can increase performance at the cost of more engineering time. The day-to-day challenge you face is how to make the process as efficient as possible with the given trade-offs while pushing the boundaries of the FPGA technology.
Skills needed to be an FPGA engineer
Things change extremely rapidly in the trading world, and agility is the name of the game. Unsurprisingly, FPGA engineers tend to enjoy a challenge. To work as an FPGA engineer at a company like IMC, you have to be a great problem-solver, a quick learner and highly adaptable.
What makes IMC a great fit for an FPGA engineer?
IMC offers a great team dynamic. We are a smaller company than many larger technology or finance houses, and we operate very much like a family unit. This means that, as a graduate engineer, you’ll never be far from the action, and you’ll be able to make an impact from day one.
Another key difference is that you’ll get to see the final outcome of your work. If you come up with an idea, we’ll give you the chance to make it work. If it does, you’ll see the results put into practice in a matter of days, which is always a great feeling. If it doesn’t, you’ll get to find out why – so there’s an opportunity to learn and improve for next time.
Ultimately, working at IMC is about having skin in the game. You’ll be entrusted with making your own decisions. And you’ll be working side by side with super smart people who are open-minded and always interested in hearing your ideas. Market making is a technology-dependent process, and we’re all in this together.
Think you have what it takes to make a difference as a technology graduate at IMC? Check out our graduate opportunities page.
submitted by IMC_Trading to u/IMC_Trading

Mining ERC-918 Tokens (0xBitcoin)

GENERAL INFORMATION

0xBitcoin (0xBTC) is the first mineable ERC20 token on Ethereum. It uses mining for distribution, unlike all previous ERC20 tokens which were assigned to the contract deployer upon creation. 0xBTC is the first implementation of the EIP918 mineable token standard (https://eips.ethereum.org/EIPS/eip-918), which opened up the possibility of a whole new class of mineable assets on Ethereum. Without any ICO, airdrop, pre-mine, or founder’s reward, 0xBitcoin is arguably the most decentralized asset in the Ethereum ecosystem, including even Ether (ETH), which had a large ICO.
The goal of 0xBitcoin is to be looked at as a currency and store of value asset on Ethereum. Its 21 million token hard cap and predictable issuance give it scarcity and transparency in terms of monetary policy, both things that Ether lacks. 0xBitcoin has certain advantages over PoW based currencies, such as compatibility with smart contracts and decentralized exchanges. In addition, 0xBTC cannot be 51% attacked (without attacking Ethereum), is immune from the “death spiral”, and will receive the benefits of scaling and other improvements to the Ethereum network.

GETTING 0xBITCOIN TOKENS

0xBitcoin can be mined using typical PC hardware, traded on exchanges (either decentralized or centralized) or purchased from specific sites/contracts.

-Mined using PC hardware

-Traded on exchanges such as


MINING IN A NUTSHELL

0xBitcoin is a Smart Contract on the Ethereum network, and the concept of Token Mining is patterned after Bitcoin's distribution. Rather than solving 'blocks', work is issued by the contract, which also maintains a Difficulty which goes up or down depending on how often a Reward is issued. Miners can put their hardware to work to claim these rewards, in concert with specialized software, working either by themselves or together as a Pool. The total lifetime supply of 0xBitcoin is 21,000,000 tokens and rewards will repeatedly halve over time.
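As a rough illustration of how a halving schedule converges on a hard cap, the sketch below sums a Bitcoin-style emission. The era length of 210,000 rewards and the initial 50-token reward are borrowed from Bitcoin's schedule for illustration; 0xBitcoin's exact halving interval may differ.

```python
def total_emission(initial_reward: float = 50,
                   rewards_per_era: int = 210_000,
                   eras: int = 64) -> float:
    """Sum a halving emission schedule: the reward is paid rewards_per_era
    times, then halves, so the total converges on a hard cap."""
    total, reward = 0.0, initial_reward
    for _ in range(eras):
        total += reward * rewards_per_era
        reward /= 2
    return total
```

With these constants the geometric series converges on 21,000,000 tokens, which is how repeated halvings produce a fixed lifetime supply.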
The 0xBitcoin contract was deployed by Infernal_Toast at Ethereum address: 0xb6ed7644c69416d67b522e20bc294a9a9b405b31
0xBitcoin's smart contract, running on the Ethereum network, maintains a changing "Challenge" (that is generated from the previous Ethereum block hash) and an adjusting Difficulty Target. Like traditional mining, the miners use the SoliditySHA3 algorithm to solve for a Nonce value that, when hashed alongside the current Challenge and their Minting Ethereum Address, is less-than-or-equal-to the current Difficulty Target. Once a miner finds a solution that satisfies the requirements, they can submit it into the contract (calling the Mint() function). This is most often done through a mining pool. The Ethereum address that submits a valid solution first is sent the 50 0xBTC Reward.
(In the case of pools, valid solutions that do not satisfy the full difficulty specified by the 0xBitcoin contract, but that DO satisfy the pool's specified minimum share difficulty, earn a 'share'. When one of the miners on that pool finds a "full" solution, the number of shares each miner's address has submitted is used to calculate how much of the 50 0xBTC reward each will get.) After a reward is issued, the Challenge changes.
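The search loop a miner runs can be sketched as follows. Note that the real contract hashes ABI-packed values with Solidity's keccak256; the standard library's sha3_256 (NIST SHA-3, a different padding rule) is used here only as a stand-in to show the shape of the loop.

```python
import hashlib
import os

def mine(challenge: bytes, miner_addr: bytes, target: int,
         max_tries: int = 100_000):
    """Search for a nonce such that H(challenge ++ address ++ nonce) <= target.
    NOTE: the real 0xBitcoin contract uses Solidity's keccak256 over
    ABI-packed values; hashlib.sha3_256 is only a stand-in."""
    for _ in range(max_tries):
        nonce = os.urandom(32)                       # random candidate nonce
        digest = hashlib.sha3_256(challenge + miner_addr + nonce).digest()
        if int.from_bytes(digest, "big") <= target:  # solution found
            return nonce, digest
    return None                                      # give up after max_tries
```

In a pool, the same loop simply runs against the pool's easier share target instead of the contract's full difficulty target.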
A Retarget happens every 1024 rewards. In short, the Contract tries to target an Average Reward Time of about 60 times the Ethereum block time. So (at the time of this writing):
~13.9 seconds × 60 ≈ 834 seconds ≈ 13.9 minutes
If the average reward time is longer than that, the difficulty will decrease; if it's shorter, it will increase. How much longer or shorter it was determines the magnitude of the rise/drop, up to a maximum of 50%. Visit the stats page (https://0x1d00ffff.github.io/0xBTC-Stats) to see recent stats and block times; feel free to ask questions about it if you need help understanding it.

MINING HARDWARE

Presently, 0xBitcoin and "Alt Tokens" can be mined on GPUs, CPUs, IGPs (on-CPU graphics) and certain FPGAs. The most recommended hardware is nVidia graphics cards for their efficiency, ubiquity and relatively low cost. As general rules, the more cores and the higher core frequency (clock) you can get, the more Tokens you will earn!
Mining on nVidia cards:
Mining on AMD cards:
Mining on IGPs (e.g. AMD Radeon and Intel HD Graphics):
Clocks and Power Levels:

MINING SOFTWARE AND DESCRIPTIONS

For the most up-to-date version info, download links, thread links and author contact information, please see this thread: https://www.reddit.com/0xbitcoin/comments/8o06dk/links_to_the_newestbest_miners_for_nvidia_amd/ Keep up to date for the latest speed, stability and feature enhancements!
COSMiC Miner by LtTofu:
SoliditySha3Miner by Amano7:
AIOMiner All-In-One GPU Miner:
TokenMiner by MVis (Mining-Visualizer):
"Nabiki"/2.10.4 by Azlehria:
~Older Miners: Older and possibly-unsupported miner versions can be found at the above link for historical purposes and specific applications- including the original NodeJS CPU miner by Infernal Toast/Zegordo, the '1000x' NodeJS/C++ hybrid version of 0xBitcoin-Miner and Mikers' enhanced CUDA builds.

FOR MORE INFORMATION...

If you have any trouble, the friendly and helpful 0xBitcoin community will be happy to help you out. Discord has become 0xBTC's community hub, where you can get answers fastest from devs and helpful community members. Or message one of the community members on reddit listed below.
Links
submitted by GeoffedUP to gpumining

Best $100-$300 FPGA development board in 2018?

Hello, I’ve been trying to decide on an FPGA development board, and have only been able to find posts and Reddit threads from 4-5 years ago. So I wanted to start a new thread and ask about the best “mid-range” FPGA development board in 2018. (Price range $100-$300.)
I started with this Quora answer about FPGA boards, from 2013. The Altera DE1 sounded good. Then I looked through the Terasic DE boards.
Then I found this Reddit thread from 2014, asking about the DE1-SoC vs the Cyclone V GX Starter Kit: https://www.reddit.com/FPGA/comments/1xsk6w/cyclone_v_gx_starter_kit_vs_de1soc_board/ (I was also leaning towards the DE1-SoC.)
Anyway, I thought I better ask here, because there are probably some new things to be aware of in 2018.
I’m completely new to FPGAs and VHDL, but I have experience with electronics/microcontrollers/programming. My goal is to start with some basic soft-core processors. I want to get some C / Rust programs compiling and running on my own CPU designs. I also want to play around with different instruction sets, and maybe start experimenting with asynchronous circuits (e.g. clock-less CPUs)
Also I don’t know if this is possible, but I’d like to experiment with ternary computing, or work with analog signals instead of purely digital logic. EDIT: I just realized that you would call those FPAAs, i.e. “analog” instead of “gate”. Would be cool if there was a dev board that also had an FPAA, but no problem if not.
EDIT 2: I also realized why "analog signals on an FPGA" doesn't make any sense, because of how LUTs work. They emulate boolean logic with a lookup table, and the table can only store 0s and 1s. So there's no way to emulate a transistor in an intermediate state. I'll just have play around with some transistors on a breadboard.
UPDATE: I've put together a table with some of the best options:
Board Maker Chip LUTs Price SoC? Features
icoBoard Lattice iCE40-HX8K 7,680 $100 Sort of A very simple FPGA development board that plugs into a Raspberry Pi, so you have a "backup" hard-core CPU that can control networking, etc. Supports a huge range of pmod accessories. You can write a program/circuit so that the Raspberry Pi CPU and the FPGA work together, similar to a SoC. Proprietary bitstream is fully reverse engineered and supported by Project IceStorm, and there is an open-source toolchain that can compile your hardware design to bitstream. Has everything you need to start experimenting with FPGAs.
iCE40-HX8K Breakout Board Lattice iCE40-HX8K-CT256 7,680 $49 No 8 LEDs, 8 switches. Very similar to icoBoard, but no Raspberry Pi or pmod accessories.
iCE40 UltraPlus Lattice iCE40 UltraPlus FPGA 5280 $99 No Chip specs. 4 switchable FPGAs, and a rechargeable battery. Bluetooth module, LCD Display (240 x 240 RGB), RGB LED, microphones, audio output, compass, pressure, gyro, accelerometer.
Go Board Lattice ICE40 HX1K FPGA 1280 $65 No 4 LEDs, 4 buttons, Dual 7-Segment LED Display, VGA, 25 MHz on-board clock, 1 Mb Flash.
snickerdoodle Xilinx Zynq 7010 28K $95 Yes Xilinx Zynq 7-Series SoC - ARM Cortex-A9 processor, and Artix-7 FPGA. 125 IO pins. 1GB DDR2 RAM. Texas Instruments WiLink 8 wireless module for 802.11n Wi-Fi and Bluetooth 4.1. No LEDs or buttons, but easy to wire up your own on a breadboard. If you want to use a baseboard, you'll need a snickerdoodle black ($195) with the pins in the "down" orientation. (E.g. The "breakyBreaky breakout board" ($49) or piSmasher SBC ($195)). The snickerdoodle one only comes with pins in the "up" orientation and doesn't support any baseboards. But you can still plug the jumpers into the pins and wire up things on a breadboard.
numato Mimas A7 Xilinx Artix 7 52K $149 No 2Gb DDR3 RAM. Gigabit Ethernet. HDMI IN/OUT. 100MHz LVDS oscillator. 80 IOs. 7-segment display, LEDs, buttons. (Found in this Reddit thread.)
Ultra96 Xilinx Zynq UltraScale+ ZU3EG 154K $249 Yes Has one of the latest Xilinx SoCs. 2 GB (512M x32) LPDDR4 Memory. Wi-Fi / Bluetooth. Mini DisplayPort. 1x USB 3.0 type Micro-B, 2x USB 3.0 Type A. Audio I/O. Four user-controllable LEDs. No buttons and limited LEDs, but easy to wire up your own on a breadboard
Nexys A7-100T Xilinx Artix 7 15,850 $265 No 128MiB DDR2 RAM. Ethernet port, PWM audio output, accelerometer, PDM microphone, etc. 16 switches, 16 LEDs. 7 segment displays. USB HID Host for mice, keyboards and memory sticks.
Zybo Z7-10 Xilinx Zynq 7010 17,600 $199 Yes Xilinx Zynq 7000 SoC (ARM Cortex-A9, 7-series FPGA.) 1 GB DDR3 RAM. A few switches, push buttons, and LEDs. USB and Ethernet. Audio in/out ports. HDMI source + sink with CEC. 8 Total Processor I/O, 40 Total FPGA I/O. Also a faster version for $299 (Zybo Z7-20).
Arty A7 Xilinx Artix 7 15K $119 No 256MB DDR3L. 10/100 Mbps Ethernet. A few switches, buttons, LEDs.
DE10-Standard (specs) Altera Cyclone V 110K $350 Yes Dual-core Cortex-A9 processor. Lots of buttons, LEDs, and other peripherals.
DE10-Nano Altera Cyclone V 110K $130 Yes Same as DE10-Standard, but not as many peripherals, buttons, LEDs, etc.

Winner:

icoBoard ($100). (Buy it here.)
The icoBoard plugs into a Raspberry Pi, so it's similar to having a SoC. The iCE40-HX8K chip comes with 7,680 LUTs (logic elements.) This means that after you learn the basics and create some simple circuits, you'll also have enough logic elements to run the VexRiscv soft-core CPU (the lightweight Murax SoC.)
The icoBoard also supports a huge range of pluggable pmod accessories:
You can pick whatever peripherals you're interested in, and buy some more in the future.
Every FPGA vendor keeps their bitstream format secret. (Here's a Hacker News discussion about it.) The iCE40-HX8K bitstream has been fully reverse engineered by Project IceStorm, and there is an open-source set of tools that can compile Verilog to iCE40 bitstream.
This means that you have the freedom to do some crazy experiments, like:
You don't really have the same freedom to explore these things with Xilinx or Altera FPGAs. (Especially asynchronous circuits.)

Links:

Second Place:

iCE40-HX8K Breakout Board ($49)

Third Place:

numato Mimas A7 ($149).
An excellent development board with a Xilinx Artix 7 FPGA, so you can play with a bigger / faster FPGA and run a full RISC-V soft-core with all the options enabled, and a much higher clock speed. (The iCE40 FPGAs are a bit slow and small.)
Note: I've changed my mind several times as I learned new things. Here's some of my previous thoughts.

What did I buy?

I ordered a iCE40-HX8K Breakout Board to try out the IceStorm open source tooling. (I would have ordered an icoBoard if I had found it earlier.) I also bought a numato Mimas A7 so that I could experiment with the Artix 7 FPGA and Xilinx software (Vivado Design Suite.)

Questions

What can I do with an FPGA? / How many LUTs do I need?

submitted by ndbroadbent to FPGA

AMA with Sinovate, a new GPU friendly coin with new innovations to the space

SINOVATE
What SINOVATE is aiming on Cryptocurrency Market?
SINOVATE is created for Innovation and it aims to keep bringing never before seen Innovations in the crypto market.
What are Infinity Nodes, and how do they differ from the classical masternode system?
Infinity Nodes are groundbreaking evolved masternodes that solve the inflation problem. Traditional masternodes start with high ROI but with very large inflation, and that inflation is what inevitably makes them fail.
What is IDS, why is it better than cloud storage? And size providers how to get/ earn SIN?
IDS = Incorruptible Data Storage.
IDS is a peer-to-peer private networking system which will permit transactions and storage between miners and Infinity Node owners. Competitors including Sia, Storj, BitTorrent and even IPFS solutions reward individuals for serving and hosting content on their hard drive space, which requires computers to maintain 24/7 uptime. User hard drives must remain available, and the rewards received must justify the costs of leaving the computer online.
In IDS, the private networking of decentralized storage relies solely on the SINOVATE Blockchain, with only node owners receiving rewards as compensation for utilising their hard drive resources to run an Infinity Node. Node owners will get rewards both from the Infinity Nodes and from storing confidential data.
IDS will have 5 steps of evolution.
SINOVATE supports 533 tx/s. How are you planning to use this as a use case?
Scalability is one of the biggest problems in cryptocurrencies. PoS-only or centralized cryptocurrencies have higher scalability but do not fit the original Satoshi plan. Satoshi Nakamoto's dream was for everybody to mine their own coins without centralization, so the SINOVATE blockchain is not only a highly scalable PoW cryptocurrency but will also increase its scalability further in the future. Mass adoption requires high scalability, especially for real-life use as a means of payment.
Are we going to see a SINOVATE payment system in the future?
SINOVATE payment gateway will be released this year with high scalability and less than 3 seconds transaction times with the help of FlashSend.
What is SINOVATE aiming with X25X Algorithm?
SINOVATE, formerly SUQA, has always aimed at the ordinary user, starting with the custom X22i algorithm and upgrading to X25X to fight the big hardware companies, so everyone can mine their own coin without letting ASIC/FPGA companies dominate the network.
Algo Comparison Chart
We are committed to remaining ASIC/FPGA resistant and as such use an ever-evolving algorithm, the latest variation of which, X25X, launched with the last update. It is protected from difficulty attacks by Dark Gravity Wave v3, and it raises the memory requirements compared to X22i by a factor of five, making it harder to implement on ASICs and FPGAs.
What is Komodo dPOW , and when is the plan implementation on SINOVATE?
dPoW diagram
KOMODO dPoW is a proven 51% attack protection technology that prevents malicious attacks with the help of notarized data across the Bitcoin, KOMODO and SINOVATE chains.
What is the current status on mobile wallets? We saw a mobile wallet trailer.
Mobile wallets will be released in July 2019 as custom, polished wallets tailored to the specific needs of the SIN blockchain.
What is the plan for adoption in real life SINOVATE?
Our team draws from a large diversity of skills from many areas of business and across many different industries. This allows us to design and hone the experience of interacting with the SINOVATE Blockchain at many levels, from developers, business leaders and operational levels, down to the end-user experience.
This allows us to develop software and user experiences from the perspective of all involved, ensuring that the end user is the primary focus.
What is the current financial status on SINOVATE?
SINOVATE are transparent about the financial status of the foundation and the activity taken with funds. We regularly publish updates and the latest one for June is here.
What partnerships will there be in the future?
Besides the Masternodes related partnerships, SINOVATE partnered with KOMODO for the integration of dPoW 51% attack protection, which will be active at the end of July or early August 2019.
As the foundation’s mission is to grow the space for all. We are happy to work with all projects and businesses both by learning from the great work others have undertaken and offering something back to other projects with our open source code.
With Governance what can it do for the community?
Decentralized governance is the future of any successful blockchain project, SINOVATE believes that blockchain will be ubiquitous in the underlying infrastructure and services in the future of everyday life. Having fair voting for developments, marketing and innovations of the SINOVATE chain will be very important for everyone.
Hopefully that serves as an introduction; please fire away below with any questions you might have for us, and feel free to join sinovate for the latest news!
Edit - Thanks for the great questions and discussion. First round answered by our CEO u/cryplander, feel free to shoot more :)
submitted by nick_badlands to gpumining [link] [comments]

Ritocoin - a 100% community driven project based on Ravencoin


tl;dr: Ritocoin is a code fork of the Ravencoin codebase and continues to track future Ravencoin developments. The project was launched to provide a more community-oriented blockchain with the same functionality as Ravencoin, without a corporate overseer, and with a more flexible model for community participation and development. Its intention is to be a hacker's playground for innovative ideas.

Specifications

Proof-of-Work Algorithm: X21S
Block Time: 60 seconds
POW Block Reward: Smooth curve down
Community fund: 1% first year
Difficulty Retargeting: DGW-180
Maximum Supply:
6 months: 993,521,892 RITO
1 year: 1,227,448,858 RITO
5 years: 1,762,210,058 RITO
10 years: 1,820,404,381 RITO
50 years: 2,030,907,256 RITO
100 years: 2,293,707,246 RITO
Infinite: 10 RITO per block in perpetuity

Pre-mine: None
Masternodes: Researching for use case
Asset layer: Was enabled at height 50,000
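The "smooth curve down" reward and the 1% community fund can be sketched in a few lines of Python. The decay parameters below are invented purely for illustration and do not reproduce Ritocoin's published schedule; the real figures are in the charts on the website.

```python
def block_reward(height: int,
                 initial: float = 5000.0,
                 tail: float = 10.0,
                 half_life: int = 2_100_000) -> float:
    """Hypothetical smooth-curve-down emission: the reward decays
    exponentially from `initial` toward the perpetual 10 RITO tail.
    `half_life` is an illustrative value, not Ritocoin's actual one."""
    return (initial - tail) * 0.5 ** (height / half_life) + tail

def community_fund_cut(reward: float, rate: float = 0.01) -> float:
    """About 1% of each mined block goes to the community fund
    during the first year."""
    return reward * rate
```

A curve of this shape never hits zero: it approaches the 10 RITO tail asymptotically, matching the "10 RITO per block in perpetuity" line in the specification.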

Links
Website
/ritocoin
Explorer
Github
Whitepaper
twitter
[ANN]

X21S

This hashing algorithm was created specifically for Ritocoin, and was designed to resist FPGAs, ASICs, and NiceHash. It is X16S (16 algorithms shuffled and hashed), followed by 5 additional hashing algorithms: haval256, tiger, lyra2, gost512, and sha256. The inclusion of lyra2 brings numerous advantages, making parallelization of the algorithm practically impossible, with each step relying on the previous step having already been computed. It is a "friendly" algorithm that makes GPUs produce much less heat and use less electricity during mining.
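The sequential-chaining property can be sketched in Python. Note that haval256, tiger, lyra2 and gost512 are not available in Python's standard library, so stdlib digests stand in for them here; this illustrates only the chained structure, not the real X21S.

```python
import hashlib

# Stand-ins for the real X21S tail stages. Only the final sha256
# matches the actual algorithm; the rest are placeholders chosen
# from Python's stdlib to show the structure.
STAGES = [
    lambda d: hashlib.sha3_256(d).digest(),  # stand-in for haval256
    lambda d: hashlib.blake2b(d).digest(),   # stand-in for tiger
    lambda d: hashlib.blake2s(d).digest(),   # stand-in for lyra2
    lambda d: hashlib.sha512(d).digest(),    # stand-in for gost512
    lambda d: hashlib.sha256(d).digest(),    # final sha256 stage
]

def chained_hash(header: bytes) -> bytes:
    """Each stage consumes the previous stage's output, so no stage
    can start before the one before it finishes. This sequential
    dependency is what frustrates deeply pipelined hardware."""
    digest = header
    for stage in STAGES:
        digest = stage(digest)
    return digest
```

Because stage N's input is stage N-1's output, a miner cannot compute the stages in parallel, which is the property the text attributes to the lyra2 step.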

Take your time to learn more about us in the below story of Ritocoin...

The spirit of Bitcoin continues to inspire, empower and enable people around the globe. Ten years later, just as it seemed Bitcoin was being defined by commercial agents and regulated governance, that same free and independent spirit imbued the Ravencoin community. In ten short months, however, 30% of the Ravencoin project’s net hash comes from NiceHash and the looming impact of the imminent FPGA mining cards and X16R bitstreams certainly promises to shake up the dream of this GPU miner’s darling.

Ravencoin’s fair launch genuinely inspired our developers and supporters. We admire the way Ravencoin came out swinging — fighting for fairness, an honest distribution of coins and a place where GPU miners could thrive. The asset layer attracted many more miners and investors to the pools. Many Ritocoin enthusiasts came from the Ravencoin community, and continue their association with that project.

The whole crypto ecosystem should appreciate the work begun by Ravencoin. Obviously they continue to inspire and motivate us to this day. It’s the reason we took action. We decided to start our own project which focuses upon at least two pillars of decentralized networks in the crypto space: community governance and a fair distribution of coins. It is a core belief throughout Ritocoin that in order to successfully develop and maintain this hacker’s playground — a place where a broad range of ideas could be tried and allowed to flourish — these two ideals must be allowed to drive and guide our community.

This deep focus on community choices creates a project flexible enough to support most ideas, and agile enough to define new frontiers.

A mining network’s distributed ledger is defined by its technology. Like many in the broader crypto-mining community, we value the GPU for its accessibility. These processors are available for purchase all around the world without any legal restrictions. GPUs are vastly more accessible for hobbyists and miners to acquire. They can be shipped nearly anywhere around the globe, a nice benefit to the popular secondary market which has sprung up much to the chagrin of PC gamers.

More constraints exist for the ASIC and FPGA miner. Laws in some parts of the world restrict people from using or buying ASIC and FPGA mining hardware. This alone is directly in confrontation with Ritocoin’s core values of decentralized stewardship and sovereignty.

The GPU, in essence, is like your voice. Anyone with the means of acquiring one GPU should be able to have their voice heard. ASIC and FPGA mining devalues the GPU miner’s voice and silos that coin’s network away from the small scale and personal mining operator. A truly community driven project means each stakeholder, regardless of size of contribution to the network’s net hash, has an opportunity to build, vote and direct.

If you are already familiar with our website, discord or whitepaper, you are probably aware that masternodes had been proposed as a feature of the network from the beginning. This opened the door to ongoing discussions in the Ritocoin community regarding

● A masternode’s true purpose

● What benefit they provide to the project

● How the benefit is realized

● The collateral

This discussion, governed entirely by stakeholders across the extended network, yielded a defining moment for our vision of flexibility. We have not yet found a compelling utility for masternodes; however, the conversation has not reached the point where we would abandon the idea. To quote one of our developers during this discussion on our Discord:

“Just want to give a reminder here that even though masternodes are on the roadmap, it is not set in stone. This coin belongs to the community and we will do what we as a community want to do. If we conclude that we want to take this coin a different direction than masternodes, then that is what we’ll do.” --traysi

We are all volunteers at Ritocoin. Our moderators and community leaders try to give immediate support to all users that require it. Contact us in Discord or Telegram, not only for support, but, proposing new ideas, revising old ones and just so you can find a place to get together and find people to hang out with. You are well within your rights to enjoy yourself at any given moment, and, should you feel so inclined to begin working with the team, we just so happen to be looking for ambitious individuals that see themselves as being part of a greater vision, are inspired by change, and inspired to be the change they want to see making things better in this world.

Join us in a space where your ideas to build something great can become a reality. We are eager to know what you think is best for the future of Rito. What steps would you take to become more resilient, stronger, fair and decentralized? Because at the end of the day, like it or not, love it or leave it.. this is your coin, too.

You can become a significant part of this project. We will help you further develop the role you wish to fill in the cryptocurrency space — influencer, developer, analyst, you name it. This is not a just-for-developer’s playground. We want the enthusiasts. We want the perplexed and the rabbit-hole divers. This is the coin for everyone who is trying to find their place on the path that Satoshi began unfolding in 2008 after the collapse of the housing market rippled out into the subsequent crash of global markets. That’s why we have Bitcoin, remember? Be your own bank. This is why Satoshi and Bitcoin.org kept their software open source. It’s up to us to keep the torch ablaze.

Community funds

For the first year, about 1% of mined coins are set aside into a developers fund that is used to provide bounties to the community developers who make substantial development contributions to the Ritocoin ecosystem. We have already paid out numerous bounties for important work that has already benefited Ritocoin in substantial ways. We also have another donation-driven community fund that has recently been put together for the purposes of doing fun contests and things like that.

Cooperation and collaborations

We have discovered a number of fatal flaws in the original Ravencoin codebase and worked with the Ravencoin developers to get those fixed in both Ritocoin and Ravencoin. This work has benefitted Ravencoin in numerous ways and we look forward to a long time of collaboration and cooperation between us and them. Many members of the Safecoin team are also in our discord group, and have collaborated with us in shaping the future decisions of Ritocoin. We have several thousand members in our group and they represent all walks of cryptocurrency life. We invite all coin developers, miners and enthusiasts to join our discord and be a part of this coin that truly belongs entirely to the community.

Block reward

A couple weeks ago we met for a scheduled meeting in our discord group and had a lengthy conversation about the block reward. Our block reward started at 5,000 RITO per block (every 60 seconds) just like Ravencoin. This extremely high number of coins coupled with the high profitability of mining led to unforeseen consequences, with pools auto-exchanging the coin into bitcoin. This dumping by non-community miners had a very negative impact on community sentiment and morale as we watched the exchange price plunge. We looked at other coins and realized that this fate has befallen many other coins with high block rewards. Following much discussion, we decided to change the reward structure. Starting around March 19th the block rewards will start to slowly go down in a curve until they reach 1,000. Then the reduction will slow further, with block rewards exponentially dropping at periodic intervals. We have posted charts on our website that show what the long-term effects of our reward-reducing algorithms will be. As a miner, the next 2 months will be a great time to mine and hold, while the block reward is still fairly high. We encourage all miners and cryptocurrency enthusiasts to take advantage of the current favourable block reward and build a nice holding for yourself. Then join the community and be a part of the fun we're having with this project.
This post was prepared by a collaboration of multiple Ritocoin members and was posted to reddit by the core developer Trevali, who posts to reddit under the ritocoin username and will be very happy to answer any questions anybody may have about our project. Traysi (well known in the Ravencoin community) is also an active Ritocoin developer and may come to this thread if needed.
We welcome any questions from any of you regarding our project!
submitted by ritocoin to gpumining [link] [comments]

Potential cons (risks) of Ravencoin?

Hi. I am an RVN hodler in a Korean crypto investment community.
I understand that Ravencoin has huge potential in terms of becoming a good asset platform and making good profit in the future.
But I just want to hear your thoughts about the potential cons (risks) of the Ravencoin blockchain.
Sorry for wrong English grammar. English is not my mother language :)

1) Absence of smart contracts. I think smart contracts are a key feature of future digital finance based on distributed ledgers. But Ravencoin is a Bitcoin code fork and so has very limited smart contract functionality. The white paper mentions only that smart contracts might be implemented on the 2nd layer. It seems that the main developers' priority is developing the other things mentioned in the roadmap, not smart contracts.

2) Absence of privacy. Corporations and financial institutions usually do not want every transaction exposed to the public. They definitely want privacy when dealing with confidential contracts and transactions. Like smart contracts, adding a privacy function does not seem to be a top development priority; the team is just calling for other developers to build it.

3) X16R FPGA mining is already in operation, and better FPGA chips are under development by hidden players. This equipment is not widely available to the public yet, which means some mining whales already exist, like Bitmain's ASIC mining in the early days of Bitcoin. This phenomenon may lead to mining centralization in the future, as happened on the Bitcoin blockchain.

4) In the future, mining centralization may cause a chain split, i.e. a hard fork. Because Ravencoin is an open-source public blockchain, what happened on the Bitcoin blockchain can also happen to Ravencoin. As Ravencoin is really focused on assets, which have financial value attached, stability is very important. So something like a chain-splitting hard fork is a big threat, and it can be a reason for hesitation among the corporations and financial institutions that are considering the Ravencoin blockchain for their business platform.

Please share your thoughts.
RVN Gazua!!
submitted by hopefulko to Ravencoin [link] [comments]

[PoW|Hash] SquashPoW - ASIC Resistant, Asymmetric Hash

Discussions around ProgPoW, Ethash and RandomX resulted in one agreement: memory-intensity (mainly bus-intensity) can be used to achieve or increase resistance against ASICs, to bring mining back to the average Joe and re-distribute it.
Meanwhile, a new algorithm called rainforest started being used in new coins such as MicroBitcoin. While the developer of said algorithm seems confident that it is expensive for ASICs and FPGAs to implement, issues have been found in the code, which resulted in (closed-source) GPU miners running at 1000x the original speed and FPGA vendors listing this algorithm among those their hardware can mine.
Using the research done for the rainforest algorithm, a brand new hash called "Squash" has been created. It has similar properties to rainforest, meaning that it still utilizes "expensive" functions, but it also achieves speeds very close to blake2 (4 to 5.5 cycles per byte, depending on the architecture).
To also have shared properties with Ethash and ProgPoW, a variant called SquashPoW has been designed. It uses the same interior design. This supposedly results in expensive ASICs with low potential gain and more importantly - asymmetry. Asymmetry allows developers or "coins" to force a miner to run on a relatively large scratchpad while a verifier can run on significantly less resources and therefore still inherit the ability to properly validate incoming blocks. More on that in the ethash design rationale.
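The asymmetry idea borrowed from Ethash can be sketched as follows. A small cache (verifier-side) deterministically generates a much larger scratchpad (miner-side); the miner precomputes the whole scratchpad, while a verifier recomputes only the few items it needs. The sizes and mixing functions here are toy values, not Ethash's or SquashPoW's actual parameters.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Toy mixing function built on sha256."""
    return hashlib.sha256(b"".join(parts)).digest()

CACHE_SIZE = 64       # small verifier-side memory
DATASET_SIZE = 4096   # large miner-side scratchpad

def make_cache(seed: bytes) -> list:
    """Small cache derived sequentially from a seed."""
    cache = [h(seed)]
    for _ in range(1, CACHE_SIZE):
        cache.append(h(cache[-1]))
    return cache

def dataset_item(cache: list, i: int) -> bytes:
    """Each large-dataset item depends on a few cache entries,
    so a verifier can compute just the items it needs on the fly."""
    a = cache[i % CACHE_SIZE]
    b = cache[(i * 7 + 3) % CACHE_SIZE]
    return h(a, b, i.to_bytes(4, "big"))

def full_dataset(cache: list) -> list:
    """The miner precomputes the whole scratchpad once."""
    return [dataset_item(cache, i) for i in range(DATASET_SIZE)]
```

The key point is that both paths compute identical items, so a low-resource node can still validate blocks produced by a high-memory miner.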
Now, what's new in SquashPoW?
In case you are now interested in testing out SquashPoW, I highly recommend checking out the source code, which can be seen at the official GitHub Repository.
Please note, SquashPoW is merely a variation of the concepts of Ethash. If you enjoy this hash, please show the original some love.
Please also note, that this is merely a post to spread awareness.
EDIT: A reference implementation can be found here
submitted by Luke_ClashProject to CryptoNoteTech [link] [comments]

[Dev] The devs are doing very dull, safe things

I've got 15 minutes spare at lunch, so this is going to be very quick and quite rough:
PoW change: We're intending to stick with Scrypt mining through to the 600k block at least, because we want miners to have confidence in investing in hardware. No plans past then, but it's more negotiable at least. Really not moving to X11; it's eleven random algorithms glued together, 5 of them with ASIC implementations already, and all 11 have FPGA implementations, it would be a highly costly move that is likely to give us much worse problems than those we face now, when X11 ASICs hit.
Merged mining: There was an informal community poll, it went poorly. Needs p2pool as a pre-requisite, or all that happens is we add in functionality no-one actually uses. It's been pointed out that p2pool only really makes sense for significant miners (low powered miners are unlikely to get payouts), but for those who can use p2pool, please consider doing so.
PoS: Still under consideration long term, but we'd much rather see the price stabilise high enough that we can sustain PoW than incur the risk of moving. There are also security concerns stemming from PoS meaning that coins have to be kept in hot wallets (as opposed to in paper wallets or similar). I'd hope it's clear why we have security concerns in light of issues such as DogeVault (no, I haven't heard anything more in weeks either).
PoSV/PoT: Keeping an eye on them
Generally, the developers are focusing on integrating Bitcoin client improvements, and we're now taking a clear lead on this compared to other altcoins. You can see this clearly reflected in the source code metrics on Coin Gecko, where (did I mention?) we're the 2nd-highest coin. Reference client 1.7.2 is progressing nicely and will be another non-required update. We have a number of developers working actively on the code, and an in-depth cross-checking process to ensure the results are of the high quality and stability you would expect from software dealing with $31mil of digital assets. We're working on making a rock solid platform for a currency, not a get rich quick scheme, and I hope you can appreciate this takes time.
Also Twitch is coming.
submitted by rnicoll to dogecoin [link] [comments]

The Problem with PoW

Miners have always had it rough..

The Problem with PoW
(and what is being done to solve it)

Proof of Work (PoW) is one of the most commonly used consensus mechanisms entrusted to secure and validate many of today’s most successful cryptocurrencies, Bitcoin being one. Battle-hardened and having weathered the test of time, Bitcoin has demonstrated the undeniable strength and reliability of the PoW consensus model through sheer market saturation, and of course, its persistency.
In addition to the cost of powerful computing hardware, miners prove that they are benefiting the network by expending energy in the form of electricity, solving and hashing away at complex math problems on their computers with whatever suitable tools they have at their disposal. The mathematics involved in securing proof of work revolve around unique algorithms, each with its own benefits and vulnerabilities, and each can require different software/hardware to mine depending on the coin.
Because each block has a unique and entirely random hash, or “puzzle” to solve, the “work” has to be performed for each block individually and the difficulty of the problem can be increased as the speed at which blocks are solved increases.
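As a toy illustration of the per-block puzzle, here is a minimal hash-puzzle miner in Python. It is deliberately simplified (real Bitcoin double-hashes an 80-byte header and encodes the target as nBits), but it shows the essential loop: search for a nonce whose hash falls below a target, where each extra difficulty bit doubles the expected work.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 10**7):
    """Search for a nonce whose sha256 hash, read as an integer,
    is below the target implied by difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    raise RuntimeError("no nonce found within max_nonce")

def verify(header: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a solution costs one hash, no matter how much
    work finding it took: the asymmetry at the heart of PoW."""
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Because each block header is unique, the search has to be repeated from scratch for every block, which is exactly why the "work" cannot be reused.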

Hashrates and Hardware Types

While proof of work is an effective means of securing a blockchain, it inherently promotes competition amongst miners seeking higher and higher hashrates due to the rewards earned by the node who wins the right to add the next block. In turn, these higher hash rates benefit the blockchain, providing better security when it’s a result of a well distributed/decentralized network of miners.
When Bitcoin first launched its genesis block, it was mined exclusively by CPUs. Over the years, various programmers and developers have devised newer, faster, and more energy efficient ways to generate higher hashrates; some by perfecting the software end of things, and others, when the incentives are great enough, by creating expensive specialized hardware such as ASICs (application-specific integrated circuits). With the express purpose of extracting every last bit of hashing power, efficiency being paramount, ASICs are stripped down, bare minimum, hardware representations of a specific coin's algorithm.
This gives ASICs a massive advantage over CPUs/GPUs in terms of raw hashing power and energy consumption, but with the significant drawback of being very expensive to design and manufacture, translating to a high economic barrier for the casual miner. Because they are hardware representations of a single targeted algorithm, if a project decides to fork and change algorithms suddenly, your powerful brand-new ASIC becomes a very expensive paperweight. The high costs of developing and manufacturing ASICs and the associated risks make them unfit for mass adoption at this time.
Somewhere on the high end, in the vast hashrate expanse created between GPU and ASIC, sits the FPGA (field programmable gate array). FPGAs are basically ASICs that make some compromises with efficiency in order to have more flexibility, namely they are reprogrammable and often used in the “field” to test an algorithm before implementing it in an ASIC. As a precursor to the ASIC, FPGAs are somewhat similar to GPUs in their flexibility, but require advanced programming skills and, like ASICs, are expensive and still fairly uncommon.

2 Guys 1 ASIC

One of the issues with proof of work incentivizing the pursuit of higher hashrates is in how the network calculates block reward coinbase payouts and rewards miners based on the work that they have submitted. If a coin generated, say a block a minute, and this is a constant, then what happens if more miners jump on a network and do more work? The network cannot pay out more than 1 block reward per 1 minute, and so a difficulty mechanism is used to maintain balance. The difficulty will scale up and down in response to the overall nethash, so if many miners join the network, or extremely high hashing devices such as ASICs or FPGAs jump on, the network will respond accordingly, using the difficulty mechanism to make the problems harder, effectively giving an edge to hardware that can solve them faster, balancing the network. This not only maintains the block a minute reward but it has the added side-effect of energy requirements that scale up with network adoption.
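The retargeting logic described above can be reduced to a simple proportional rule: if blocks arrive faster than the target interval, difficulty rises; if slower, it falls. The clamp factor below is illustrative, not the actual parameterization of any specific scheme such as DGW.

```python
def retarget(old_difficulty: float,
             actual_block_time: float,
             target_block_time: float = 60.0,
             max_step: float = 4.0) -> float:
    """Scale difficulty by (target time / observed time), clamped
    so a single adjustment never moves more than max_step in either
    direction, damping wild swings when hashrate changes abruptly."""
    ratio = target_block_time / actual_block_time
    ratio = max(1.0 / max_step, min(max_step, ratio))
    return old_difficulty * ratio
```

So if an ASIC farm joins and blocks start arriving every 30 seconds instead of 60, difficulty doubles on the next adjustment and the block interval, and therefore the emission rate, returns to target.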
Imagine, for example, if one miner gets on a network all alone with a CPU doing 50 MH/s and is getting all 100 coins that can possibly be paid out in a day. Then, if another miner jumps on the network with the same CPU, each miner would receive 50 coins in a day instead of 100 since they are splitting the required work evenly, despite the fact that the net electrical output has doubled along with the work. Electricity costs miner’s money and is a factor in driving up coin price along with adoption, and since more people are now mining, the coin is less centralized. Now let’s say a large corporation has found it profitable to manufacture an ASIC for this coin, knowing they will make their money back mining it or selling the units to professionals. They join the network doing 900 MH/s and will be pulling in 90 coins a day, while the two guys with their CPUs each get 5 now. Those two guys aren’t very happy, but the corporation is. Not only does this negatively affect the miners, it compromises the security of the entire network by centralizing the coin supply and hashrate, opening the doors to double spends and 51% attacks from potential malicious actors. Uncertainty of motives and questionable validity in a distributed ledger do not mix.
When technology advances in a field, it is usually applauded and welcomed with open arms, but in the world of crypto things can work quite differently. One of the glaring flaws in the current model and the advent of specialized hardware is that it’s never ending. Suppose the two men from the rather extreme example above took out a loan to get themselves that ASIC they heard about that can get them 90 coins a day? When they join the other ASIC on the network, the difficulty adjusts to keep daily payouts consistent at 100, and they will each receive only 33 coins instead of 90 since the reward is now being split three ways. Now what happens if a better ASIC is released by that corporation? Hopefully, those two guys were able to pay off their loans and sell their old ASICs before they became obsolete.
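The arithmetic in the two paragraphs above follows from one fact: expected payout is proportional to your share of the network hashrate, while total emission stays fixed. A few lines of Python reproduce the numbers (names like "cpu1" are just labels for this sketch):

```python
def daily_rewards(hashrates: dict, daily_emission: float = 100.0) -> dict:
    """Split a fixed daily emission among miners in proportion to
    each miner's share of the total network hashrate."""
    total = sum(hashrates.values())
    return {name: daily_emission * rate / total
            for name, rate in hashrates.items()}

# Two 50 MH/s CPUs against a 900 MH/s ASIC:
split = daily_rewards({"cpu1": 50, "cpu2": 50, "asic": 900})
```

With the 900 MH/s ASIC on the network the CPUs drop to 5 coins each while the ASIC takes 90, and when three identical ASICs compete, each gets about 33, exactly the escalation the example describes.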
This system, as it stands now, only perpetuates a never ending hashrate arms race in which the weapons of choice are usually a combination of efficiency, economics, profitability and in some cases control.

Implications of Centralization

This brings us to another big concern with expensive specialized hardware: the risk of centralization. Because they are so expensive and inaccessible to the casual miner, ASICs and FPGAs predominantly remain limited to a select few. Centralization occurs when one small group or a single entity controls the vast majority of hash power and, as a result, coin supply, and is able to exert its influence to manipulate the market or in some cases the network itself (usually the case with dishonest nodes or bad actors).
This is entirely antithetical of what cryptocurrency was born of, and since its inception many concerted efforts have been made to avoid centralization at all costs. An entity in control of a centralized coin would have the power to manipulate the price, and having a centralized hashrate would enable them to affect network usability, reliability, and even perform double spends leading to the demise of a coin, among other things.
The world of crypto is a strange new place, with rapidly growing advancements across many fields, economies, and borders, leaving plenty of room for improvement; while it may feel like a never-ending game of catch up, there are many talented developers and programmers working around the clock to bring us all more sustainable solutions.

The Rise of FPGAs

With the recent implementation of the commonly used coding language C++, and due to their overall flexibility, FPGAs are becoming somewhat more common, especially in larger farms and in industrial settings; but they still remain primarily out of the hands of most mining enthusiasts and almost unheard of to the average hobby miner. Things appear to be changing, though, one example of which I'll discuss below, and it is thought by some that we will soon see a day when mining with a CPU or GPU just won't cut it any longer, and the market will be dominated by FPGAs and specialized ASICs, bringing with them efficiency gains for proof of work while also carelessly leading us all towards the next round of spending.
A perfect real-world example of the effect specialized hardware has had on the crypto community was recently discovered involving a fairly new project called VerusCoin and a fairly new, relatively more economically accessible FPGA. The FPGA is designed to target specific alt-coins whose algorithms do not require RAM overhead. It was discovered the company had released a new algorithm, kept secret from the public, which could effectively mine Verus at 20x the speed of GPUs, which were the next fastest hardware type mining on the Verus network.
Unfortunately this was done with a deliberately secret approach, calling the Verus algorithm “Algo1” and encouraging owners of the FPGA to never speak of the algorithm in public channels, admonishing a user when they did let the cat out of the bag. The problem with this business model is that it is parasitic in nature. In an ecosystem where advancements can benefit the entire crypto community, this sort of secret mining approach also does not support the philosophies set forth by the Bitcoin or subsequent open source and decentralization movements.
Although this was not done in the spirit of open source, it does hint to an important step in hardware innovation where we could see more efficient specialized systems within reach of the casual miner. The FPGA requires unique sets of data called a bitstream in order to be able to recognize each individual coin’s algorithm and mine them. Because it’s reprogrammable, with the support of a strong development team creating such bitstreams, the miner doesn’t end up with a brick if an algorithm changes.

All is not lost thanks to.. um.. Technology?

Shortly after discovering FPGAs on the network, the Verus developers quickly designed, tested, and implemented a new, much more complex and improved algorithm via a fork that enabled Verus to transition smoothly from VerusHash 1.0 to VerusHash 2.0 at block 310,000. Since the fork, VerusHash 2.0 has demonstrated doing exactly what it was designed for: equalizing hardware performance relative to the device being used while enabling CPUs (the most widely available "ASICs") to mine side by side with GPUs at a profit, and it appears this will also apply to other specialized hardware. This is something no other project has been able to do until now. Rather than pursue the folly of so many other projects before it, attempting to be "ASIC proof", Verus effectively achieved and presents to the world an entirely new model of "hardware homogeny". As the late, great Bruce Lee once said: "Don't get set into one form, adapt it and build your own, and let it grow, be like water."
In the design of VerusHash 2.0, Verus has shown it doesn’t resist progress like so many other new algorithms try to do, it embraces change and adapts to it in the way that water becomes whatever vessel it inhabits. This new approach- an industry first- could very well become an industry standard and in doing so, would usher in a new age for proof of work based coins. VerusHash 2.0 has the potential to correct the single largest design flaw in the proof of work consensus mechanism- the ever expanding monetary and energy requirements that have plagued PoW based projects since the inception of the consensus mechanism. Verus also solves another major issue of coin and net hash centralization by enabling legitimate CPU mining, offering greater coin and hashrate distribution.
Digging a bit deeper, it turns out the Verus development team are no rookies. The lead developer, Michael F Toutonghi, has spent decades in the field programming and is a former Vice President and Technical Fellow at Microsoft, recognized founder and architect of Microsoft's .Net platform, ex-Technical Fellow of Microsoft's advertising platform, ex-CTO of Parallels Corporation, and an experienced distributed computing and machine learning architect. The project he helped create employs a diverse myriad of technologies and security features to form one of the most advanced and secure cryptocurrencies to date. A brief description of what makes VerusCoin special, quoted from a community member:
"Verus has a unique and new consensus algorithm called Proof of Power which is a 50% PoW/50% PoS algorithm that solves theoretical weaknesses in other PoS systems (Nothing at Stake problem for example) and is provably immune to 51% hash attacks. With this, Verus uses the new hash algorithm, VerusHash 2.0. VerusHash 2.0 is designed to better equalize mining across all hardware platforms, while favoring the latest CPUs over older types, which is also one defense against the centralizing potential of botnets. Unlike past efforts to equalize hardware hash-rates across different hardware types, VerusHash 2.0 explicitly enables CPUs to gain even more power relative to GPUs and FPGAs, enabling the most decentralizing hardware, CPUs (due to their virtually complete market penetration), to stay relevant as miners for the indefinite future. As for anonymity, Verus is not a "forced private", allowing for both transparent and shielded (private) transactions...and private messages as well"

If other projects can learn from this and adopt a similar approach or continue to innovate with new ideas, it could mean an end to all the doom and gloom predictions that CPU and GPU mining are dead, offering a much needed reprieve and an alternative to miners who have been faced with the difficult decision of either pulling the plug and shutting down shop or breaking down their rigs to sell off parts and buy new, more expensive hardware…and in so doing present an overall unprecedented level of decentralization not yet seen in cryptocurrency.
Technological advancements led us to the world of secure digital currencies and the progress being made with hardware efficiencies is indisputably beneficial to us all. ASICs and FPGAs aren’t inherently bad, and there are ways in which they could be made more affordable and available for mass distribution. More than anything, it is important that we work together as communities to find solutions that can benefit us all for the long term.

In an ever changing world where it may be easy to lose sight of the real accomplishments that brought us to this point one thing is certain, cryptocurrency is here to stay and the projects that are doing something to solve the current problems in the proof of work consensus mechanism will be the ones that lead us toward our collective vision of a better world- not just for the world of crypto but for each and every one of us.
submitted by Godballz to CryptoCurrency [link] [comments]

[Very long, very serious] Development summary week ending 18th April 2014

When I got my first full time job, I used to try implementing requests from everyone as they came in, and for a while people really loved that I listened to their requests. Over time, however, things started to go wrong. I’d apply a change someone asked for, and in doing so would break something elsewhere in the code, in some subtle way that was missed in short-term testing. I’d fix that second bug and reveal a third. I’d fix that just in time for a new request to come in, and the process repeat. This led to the term “Bug whack-a-mole”, wherein I was mostly spending time fixing bugs introduced to live systems by rushing earlier bug fixes.
So this week, we’ve had a lot of people asking about changes to proof-of-work, especially X11, or even moving to proof of stake, primarily in an attempt to address risk of a 51% attack. A 51% attack is where one actor (person, group, organisation, whatever) gains control of enough resources to be able to create their own blockchain, isolated from the main blockchain, at a rate at least as quickly as the main blockchain is being created. They can then spend Dogecoins on the main blockchain, before releasing their fake blockchain; if their fake blockchain is longer than the existing blockchain, nodes will switch to the new blockchain (as they would when repairing a fork), and essentially the spent Dogecoin on the main blockchain are reversed and can be spent again. This is mostly of consequence to exchanges and payment processors (such as Moolah), who are most likely to end up holding the loss from the double-spend.
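For the curious, the chain-selection behaviour described above can be sketched in a few lines. This is an illustrative toy only (using chain length as a stand-in for accumulated work; real clients compare total chainwork), not Dogecoin Core code:

```python
# Toy illustration of the longest-chain rule behind a 51% attack.
# Length stands in for accumulated proof of work.

def best_chain(public_chain, attacker_chain):
    """Nodes adopt whichever valid chain represents more work (here: is longer)."""
    return attacker_chain if len(attacker_chain) > len(public_chain) else public_chain

public = ["genesis", "A1", "A2", "A3"]        # contains the attacker's spend
secret = ["genesis", "B1", "B2", "B3", "B4"]  # mined privately, with the spend omitted

adopted = best_chain(public, secret)
# The longer secret chain wins, so blocks A1-A3 (and the spend inside them)
# are orphaned, and the attacker can spend those coins again.
```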
The concern about a 51% attack stems from a couple of weeks ago now, when Wafflepool was around 50% of the network hashrate (mining power). It’s still high (at the time of writing about 32GH/s out of almost 74GH/s, or about 43%), but it is diminishing as a proportion.
Let’s talk about proof of stake first, as this one’s simpler. Proof of stake has been suggested as a way of avoiding the risk of Wafflepool having control of too many mining resources by itself, by changing from securing the blockchain through computational resources (work), to using number of Dogecoin held. The theory is that those with most Dogecoins have most to lose, and will act in their own interests. Major examples of proof of stake coins include Peercoin, Mintcoin and more recently Blackcoin.
However, this essentially means we take control from Wafflepool, and hand it to Cryptsy (who are considered most likely to be the holder of some of the huge Dogecoin wallets out there). I by no means expect either organisation to attempt a 51% attack, but hopefully it’s clear that simply switching risks isn’t actually improving things. I’ve also had significant concerns raised from the merchant/payment processor community about potential impact of proof of stake, and that it may encourage hoarding (as coins are awarded for holding coins, rather than for mining). The price instability of Mintcoin and Blackcoin (and that Peercoin appears to only avoid this through very high transaction fees to keep the entire network inert) does not encourage confidence, either. For now, proof of stake remains something we’re keeping in mind, primarily in case price does not react as anticipated to mining reward decreases over time, but certainly we’re not eager to rush into such a change.
Before I get into a discussion on proof of work, let me summarise this quickly; right now, uncertainty about changes is holding back our community from adopting ASICs. It’s high risk to spend hundreds, thousands or in some cases significantly more on ASIC hardware which could be left useless if we move. Those who have already purchased ASICs to support the Dogecoin hashrate would most likely have to mine Litecoin to recover sunk costs, if we did move. ASICs are virtually inevitable, and in our assessment we are better off pushing for rapid adoption, rather than expending resources delaying a problem which will re-occur later.
At the time of writing the development team has no plans to change proof of work algorithm outside of the eventuality of a major security break to Scrypt. We are focusing on mitigation approaches in case of a 51% attack, and adoption of the coin as the most sustainable approaches to dealing with this risk.
The X11 algorithm has been proposed as an alternative proof of work algorithm. X11, for those unaware, was introduced with Darkcoin. It’s a combination of 11 different SHA-3 candidate algorithms, using multiple rounds of hashing. The main advantage championed for Darkcoin is that current implementations run cooler on GPU hardware. Beyond that, there’s a lot of confusion over what it does and does not do. As I’m neither an algorithms or electronics specialist, I recruited a colleague who previously worked on the CERN computing grid to assist, and the following is primarily his analysis. A full technical report is coming for anyone who really likes detail, this is just a summary:
A lot of people presume X11 is ASIC resistant; it’s not. Candidate algorithms for SHA-3 were assessed on a number of criteria, including simplicity to implement in hardware. All 11 algorithms have been implemented in FPGA hardware, and several in ASIC hardware already. The use of multiple algorithms does significantly complicate ASIC development, as it means the resulting chip would likely be extremely large. This has consequences for production, as the area of a chip is the main determining factor for likelihood of an error in the chip.
The short version is that while, yes, it would take significant resources to make an efficient ASIC for X11, for a long time Scrypt was also considered infeasible to adapt to ASICs. As stated earlier, any move would most likely be nothing more than an extremely expensive and risky delaying manoeuvre. ASIC efficiency would also depend heavily on the ability to optimise the combination of the algorithms; a naive implementation would run at around the rate of the slowest hashing algorithm, however if any common elements could be found amongst the algorithms, it may be that this could be improved upon significantly.
There are also significant areas of concern with regards to X11. The “thermal efficiency” is most likely a result of the algorithm being a poor fit for GPU hardware. This means that GPU mining is closer to CPU mining (the X11 Wiki article suggests a ratio of 3:1 for GPU/CPU mining performance), however it also means that if a way was found to improve performance there could be significantly faster software miners, leading to an ASIC-like edge without any of the hardware development costs. The component algorithms are all relatively new, and several were rejected during the SHA-3 competition for security concerns (see http://csrc.nist.gov/groups/ST/hash/sha-3/Round2/documents/Round2_Report_NISTIR_7764.pdf for full details). Security criteria for SHA-3 algorithms were also focused on ability to generate collisions, rather than on producing hashes with specific criteria (such as number of leading 0s, which is how proof of work is usually assessed).
X11 is a fascinating algorithm for new coins, however I would consider it exceptionally high risk for any existing coin to adopt.
Beyond algorithm analysis, this week has been mostly about testing 1.7. Last weekend Patrick raised the issue that we had been incorrectly running the automated tests, which had led to several automated test failures being missed earlier. This led to other tasks being dropped as we quickly reworked the tests to match Dogecoin parameters instead of Bitcoin. So far, all tests have passed successfully once updated to match Dogecoin, however this work continues. On the bright side, it turns out we have a lot more automated tests than we realised, which is very useful for later development.
The source code repository for Dogecoin now also uses Travis CI, which sanity-checks patches submitted to the project, to help us catch any potential problems earlier, thanks to Tazz for leading the charge on that. This is particularly important as of course we’re developing on different platforms (Windows, OS X, Linux) and what works on one, may not work on others. Over time, this should be a significant time saver for the developers. For anyone wanting to help push Dogecoin forward, right now the most productive thing to be doing is testing either Dogecoin, or helping Bitcoin Core test pull requests. Feel free to drop by our Freenode channel for guidance on getting started with either.
Right now, I’m working on the full technical report on X11, and will then be back working on the payment protocol for Dogecoin. I’ve approached a few virus scanning software companies about offering their products for Dogecoin, with no response so far, but will update you all if I hear more.
Lastly, the next halvening (mining reward halving) is currently expected late on the 27th or early on the 28th, both times GMT. Given that it was initially expected on the 25th, we’re obviously seeing some slippage in estimates, and a total off the top of my head guess would be that we’ll see it around 0500 GMT on the 28th at this rate. I have taken the 28th off from the day job, and will be around both before and after in case of any problems (love you guys, not getting up at 5am to check on the blockchain, though!)
submitted by rnicoll to dogecoin [link] [comments]

About reducing the BCH block time, I have something to say...

I want to introduce myself first (to avoid being considered a troll).
My name is Danny, Chinese; my first contact with Bitcoin was in 2013. My background is integrated circuit design. I studied C/C++ and Linux in college 15 years ago. I am not very familiar with open source software development, but my technology background is good enough to let me learn things quickly. I developed an FPGA-based SHA-256 miner and successfully connected it to the Eligius pool in early 2014 (just for fun), all in C and Verilog. I am a developer but not a professional software developer.
I am familiar with Bitcoin, transaction and block structure. I developed a program which can upload and download arbitrary file from/to the Bitcoincash blockchain. The downloader code is open sourced: https://github.com/bchfile/BCHFILE-extractor
I think I am not a troll.
Although many users and devs think block time is not an issue, a simple fact is that not a single mainstream crypto has chosen a block time equal to or longer than 10 minutes (no offense to anyone; I just want to express the idea that this is undeniable proof that a shorter confirmation time meets real needs).
For wallet users: If someone sends me some BCH, although 0-conf gives some confidence, I still need to wait for at least 1 confirmation to "make sure" the sender is not cheating me (not 100%, but 1 confirmation really means something); each additional confirmation adds confidence.
For nodes: You cannot spend an unconfirmed UTXO by default; you need to list the unspent UTXOs and use createrawtransaction and signrawtransaction to manually create the TX and broadcast it. That means that by default you need to wait for the TX to be confirmed before spending it.
Variance: (Here I want to say sorry to many people, especially some devs; in previous posts I did not show enough respect to them. In fact, the developers have done a lot of excellent work, most of which is unpaid, but not well known to the public.) Variance has already been discussed by the devs for a long time, and the Bobtail algorithm is a potential alternative. I have not figured it out yet, but reducing the block time can achieve a similar result, and it's simple: ten 1-minute blocks have an averaging effect, with much less variance than one 10-minute block.
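The averaging effect is easy to check numerically. Here is a quick simulation (illustrative only) of exponentially distributed block intervals, comparing the wait for one 10-minute block against ten 1-minute blocks:

```python
import random
import statistics

# Waiting for "10 minutes' worth" of confirmations:
#  - one 10-minute block -> exponential, mean 10, std dev ~10
#  - ten 1-minute blocks -> sum of 10 exponentials, mean 10, std dev sqrt(10) ~ 3.2
random.seed(42)
N = 20000

one_big   = [random.expovariate(1 / 10) for _ in range(N)]
ten_small = [sum(random.expovariate(1.0) for _ in range(10)) for _ in range(N)]

print(statistics.mean(one_big), statistics.stdev(one_big))      # ~10, ~10
print(statistics.mean(ten_small), statistics.stdev(ten_small))  # ~10, ~3.2
```

Same average wait, roughly a third of the variance, which is the whole argument for splitting one long block interval into many short ones.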
For exchanges: Obviously the exchanges play the most important role in the crypto eco-system. Exchanges usually run "official" Bitcoin Cash nodes (bitcoin-abc); changing the block time does not affect them (because the RPC calls are not changed). The only effect is that they need to increase the confirmation numbers for deposits.
For developers: They need to upgrade the software before the HF just as with the previous ones. Although changing the block time is a major change, in code the changes are small: only a few lines of code are affected for the core functions. (To prove this, I created a Bitcoin-abc fork on GitHub, modified the block time to 2 minutes and reduced the subsidy to 1/5 at the same time; all the changes can be seen here: https://github.com/Danyu-Wu/bitcoin-abc/commit/884414a04884a462c8e424ab1bde2fe632f59591). I spent 1 week studying the source code, 2 days completing the modification (changes for test code and some non-core functions are not completed yet), and 3 days running the test (including running a pool connected to the testnet; here you can find the blocks I mined on the testnet: https://www.blocktrail.com/tBCC/address/mmBG7ReKgGQgqhSZQjR28NvVDfeekjpnpV). I am not a professional programmer but could finish the core changes within 2 weeks, so it is clear that the change does not need much work.
----------------------------------------------------
In summary, I think changing the block time may not be the perfect consensus change for BCH, but it is the simplest one that would improve the user experience significantly.
BTW: Anyone, especially developers who are interested in this topic, you can find the telegram group link here: https://github.com/Danyu-Wu/blocktime/blob/masteworkgroup.md
submitted by wudanyu to btc [link] [comments]

AMA with SINOVATE, a new GPU friendly coin with new innovations to the space

SINOVATE
What is SINOVATE aiming for in the cryptocurrency market?
SINOVATE was created for innovation, and it aims to keep bringing never-before-seen innovations to the crypto market.
What are Infinity Nodes, and why are they different from the classical masternode system? Infinity Nodes are groundbreaking, evolved masternodes that solve the inflation problem. Traditional masternodes start with a high ROI but very large inflation, and that inflation is what inevitably makes them fail.
What is IDS, why is it better than cloud storage? And size providers how to get/ earn SIN?
IDS = Incorruptible Data Storage.
IDS is a peer-to-peer private networking system which will permit transactions and storage between miners and Infinity Node owners. Competitors, including Sia, Storj, BitTorrent and even IPFS solutions, reward individuals for serving and hosting content on their hard drive space, which requires 24/7 uptime for their computers. User hard drives must remain available, and the rewards received must justify the costs incurred for leaving the computer online.
In IDS, the private networking of decentralized storage relies solely on the SINOVATE Blockchain, with only node owners receiving rewards as compensation for utilising their hard drive resources to run an Infinity Node. Node owners will get rewards both from the Infinity Nodes and from storing confidential data.
IDS will have 5 steps of evolution.
SINOVATE has 533 tx/s. How are you planning to use this as a use case?
Scalability is one of the biggest problems in cryptocurrencies. PoS-only or centralized cryptocurrencies have higher scalability but are not suitable for the original Satoshi plan. Satoshi Nakamoto’s dream was for everybody to mine their own coins without centralization, so the SINOVATE blockchain not only aims to be the most scalable PoW cryptocurrency but will also have much higher scalability in the future. Mass adoption requires high scalability, especially when it will be used in real life as a means of payment.
Are we going to see a SINOVATE payment system in the future?
The SINOVATE payment gateway will be released this year, with high scalability and sub-3-second transaction times with the help of FlashSend.
What is SINOVATE aiming for with the X25X algorithm?
SINOVATE (formerly SUQA) has always aimed at the ordinary user, starting with the X22i custom algorithm and upgrading to X25X to fight the big hardware companies, so that everyone can mine their own coins without letting ASIC/FPGA companies dominate the network.
Algo Comparison Chart
We are committed to remaining ASIC/FPGA resistant and as such use an ever-evolving algorithm; the latest variation, named X25X, launched with the last update. It is protected from difficulty attacks using Dark Gravity Wave v3 and raises the memory requirements compared to X22i by a factor of five, making it harder for ASICs/FPGAs to implement.
What is Komodo dPOW , and when is the plan implementation on SINOVATE?
dPoW diagram
KOMODO dPoW is a working and trusted 51% attack protection technology that prevents malicious attacks with the help of notarized data from the Bitcoin, KOMODO and SINOVATE chains.
What is the current status on mobile wallets? We saw a mobile wallet trailer.
Mobile wallets will be released in July 2019 as custom, good-looking wallets tailored to the specific needs of the SIN blockchain.
What is the plan for adoption in real life SINOVATE?
Our team draws from a large diversity of skills from many areas of business and across many different industries. This allows us to design and hone the experience of interacting with the SINOVATE Blockchain at many levels, from developers, business leaders and operational levels, down to the end-user experience.
This allows us to develop software and user experiences from the perspective of all involved, ensuring that the end user is the primary focus.
What is the current financial status on SINOVATE?
SINOVATE are transparent about the financial status of the foundation and the activity taken with funds. We regularly publish updates and the latest one for June is here.
What partnerships will there be in the future?
Besides the Masternodes related partnerships, SINOVATE partnered with KOMODO for the integration of dPoW 51% attack protection, which will be active at the end of July or early August 2019.
As the foundation’s mission is to grow the space for all. We are happy to work with all projects and businesses both by learning from the great work others have undertaken and offering something back to other projects with our open source code.
With Governance what can it do for the community?
Decentralized governance is the future of any successful blockchain project. SINOVATE believes that blockchain will be ubiquitous in the underlying infrastructure and services of everyday life. Fair voting on developments, marketing and innovations of the SINOVATE chain will be very important for everyone.
Hopefully that covers as an introduction, please fire away below with any questions you might have for us!
EDIT - More questions and answers here: https://www.reddit.com/gpumining/comments/c6pir7/ama_with_sinovate_a_new_gpu_friendly_coin_with/?st=jxkx75wy&sh=ddd2b498
submitted by nick_badlands to sinovate [link] [comments]

XMR-Stak - proudly XMR-only mining network stack (and CPU miner)

I want to show off what I was working on for the past 7 weeks or so. Just to clarify (there seems to be a lot of "give me money" posts around here recently), it will be FOSS. This is not some kind of crowd funding attempt.
Of course the purpose of this topic is to gauge interest - I want to be sure that it is worth my time to polish up "own-use grade" into release-grade software, so if you like what you see please upvote and make a noise.
 

What do you mean by a network stack? What's wrong with the current one?

Network stack is essentially all the logic that lives between the hashing code and the output to the pool. While the software that I'm writing currently has a CPU miner on top, there is no reason why it can't be modified to hash through GPU.
Current stack used by the open source CPU miner and some GPU miners has been knocking around since 2011. Its design is less than ideal - command line args put a limit on how complex the configuration can get, and the flawed network interaction design means that it needs to keep talking to the pool (keep-alive) to detect that it is still there.
Most importantly though, the code was designed for Bitcoin. Cryptonight coins have hashing speeds many orders of magnitude slower, which leads to different design choices. For example both BTC and XMR have 32 bit nonce. That means you have slightly over 4 billion attempts to find a block and you need to add fudge code in BTC that is not needed in XMR.
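To put rough numbers on that nonce-space point (the hashrates below are illustrative figures, not benchmarks):

```python
# Why a 32-bit nonce is a problem for Bitcoin but not for Cryptonight coins.
NONCE_SPACE = 2 ** 32    # slightly over 4 billion attempts per block header

btc_asic_hs = 14e12      # ~14 TH/s: ballpark for a modern SHA-256 ASIC (assumed)
xmr_cpu_hs  = 500.0      # ~500 H/s: the CPU-class rates seen with Cryptonight

# A Bitcoin ASIC exhausts the whole nonce space in a fraction of a second,
# hence the extraNonce "fudge code"; a Cryptonight CPU would take months.
print(NONCE_SPACE / btc_asic_hs, "seconds")          # well under one second
print(NONCE_SPACE / xmr_cpu_hs / 86400, "days")      # ~99 days
```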
 

CPU mining performance

I started off with Wolf's hashing code, but by the time I was done there were only a couple of lines of code that were similar.
Performance is nearly identical to the closed source paid miners. Here are some numbers:
 

Output samples

One of the most annoying things for me about the old mining stack was that it kept spewing huge amounts of redundant information. XMR-Stak prints reports when you request it to do so instead. Here they are (taken from the X5650 system running on Arch).
    HASHRATE REPORT
    | ID | 2.5s | 60s  | 15m  | ID | 2.5s | 60s  | 15m  |
    |  0 | 38.3 | 38.3 | 38.3 |  1 | 38.4 | 38.4 | 38.4 |
    |  2 | 38.4 | 38.3 | 38.3 |  3 | 38.4 | 38.4 | 38.4 |
    |  4 | 38.3 | 38.3 | 38.3 |  5 | 38.4 | 38.4 | 38.4 |
    |  6 | 38.3 | 38.3 | 38.3 |  7 | 38.4 | 38.4 | 38.4 |
    |  8 | 40.0 | 40.0 | 40.0 |  9 | 40.1 | 40.1 | 40.1 |
    | 10 | 40.0 | 40.0 | 40.0 | 11 | 40.1 | 40.1 | 40.1 |
    -----------------------------------------------------
    Totals:  467.0 467.0 467.0 H/s
    Highest: 467.0 H/s
Since this is a CLI server it is very uniform as you would expect. You can also see that some threads would gain 1.5H/s if they were on better NUMA nodes.
    RESULT REPORT
    Difficulty       : 8192
    Good results     : 316 / 316 (100.0 %)
    Avg result time  : 17.9 sec
    Pool-side hashes : 2588672
    Top 10 best results found:
    | 0 | 516321 | 1 | 488669 |
    | 2 | 391229 | 3 | 384157 |
    | 4 | 380941 | 5 | 379807 |
    | 6 | 347487 | 7 | 292038 |
    | 8 | 246997 | 9 | 244569 |
    Error details:
    Yay! No errors.
And last one:
    CONNECTION REPORT
    Connected since : 2016-12-19 20:21:38
    Pool ping time  : 141 ms
    Network error log:
    Yay! No errors.
Sample config file is as follows:
http://pastebin.com/EqyvkWkB
 

Low power mode

This is a bit of an academic exercise, showing why I don't believe that memory latency is the be-all and end-all of PoW. The idea is very simple: we do two hashes at a time, and we double the performance (as we have more time to load data from L3). We are of course still constrained by the L3 cache, but FPGAs with 50-100MB of on-chip memory are out already.
 

Some things for the future

Let me know what you think.
    -----BEGIN PGP PUBLIC KEY BLOCK-----
    Version: GnuPG v2

    mQENBFhYUmUBCAC6493W5y1MMs38ApRbI11jWUqNdFm686XLkZWGDfYImzL6pEYk
    RdWkyt9ziCyA6NUeWFQYniv/z10RxYKq8ulVVJaKb9qPGMU0ESfdxlFNJkU/pf28
    sEVBagGvGw8uFxjQONnBJ7y7iNRWMN7qSRS636wN5ryTHNsmqI4ClXPHkXkDCDUX
    QvhXZpG9RRM6jsE3jBGz/LJi3FyZLo/vB60OZBODJ2IA0wSR41RRiOq01OqDueva
    9jPoAokNglJfn/CniQ+lqUEXj1vjAZ1D5Mn9fISzA/UPen5Z7Sipaa9aAtsDBOfP
    K9iPKOsWa2uTafoyXgiwEVXCCeMMUjCGaoFBABEBAAG0ImZpcmVpY2VfdWsgPGZp
    cmVpY2UueG1yQGdtYWlsLmNvbT6JATcEEwEIACEFAlhYUmUCGwMFCwkIBwIGFQgJ
    CgsCBBYCAwECHgECF4AACgkQ+yT3mn7UHDTEcQf8CMhqaZ0IOBxeBnsq5HZr2X6z
    E5bODp5cPs6ha1tjH3CWpk1AFeykNtXH7kPW9hcDt/e4UQtcHs+lu6YU59X7xLJQ
    udOkpWdmooJMXRWS/zeeon4ivT9d69jNnwubh8EJOyw8xm/se6n48BcewfHekW/6
    mVrbhLbF1dnuUGXzRN1WxsUZx3uJd2UvrkJhAtHtX92/qIVhT0+3PXV0bmpHURlK
    YKhhm8dPLV9jPX8QVRHQXCOHSMqy/KoWEe6CnT0Isbkq3JtS3K4VBVeTX9gkySRc
    IFxrNJdXsI9BxKv4O8yajP8DohpoGLMDKZKSO0yq0BRMgMh0cw6Lk22uyulGALkB
    DQRYWFJlAQgAqikfViOmIccCZKVMZfNHjnigKtQqNrbJpYZCOImql4FqbZu9F7TD
    9HIXA43SPcwziWlyazSy8Pa9nCpc6PuPPO1wxAaNIc5nt+w/x2EGGTIFGjRoubmP
    3i5jZzOFYsvR2W3PgVa3/ujeYYJYo1oeVeuGmmJRejs0rp1mbvBSKw1Cq6C4cI0x
    GTY1yXFGLIgdfYNMmiLsTy1Qwq8YStbFKeUYAMMG3128SAIaT3Eet911f5Jx4tC8
    6kWUr6PX1rQ0LQJqyIsLq9U53XybUksRfJC9IEfgvgBxRBHSD8WfqEhHjhW1VsZG
    dcYgr7A1PIneWsCEY+5VUnqTlt2HPaKweQARAQABiQEfBBgBCAAJBQJYWFJlAhsM
    AAoJEPsk95p+1Bw0Pr8H/0vZ6U2zaih03jOHOvsrYxRfDXSmgudOp1VS45aHIREd
    2nrJ+drleeFVyb14UQqO/6iX9GuDX2yBEHdCg2aljeP98AaMU//RiEtebE6CUWsL
    HPVXHIkxwBCBe0YkJINHUQqLz/5f6qLsNUp1uTH2++zhdBWvg+gErTYbx8aFMFYH
    0GoOtqE5rtlAh5MTvDZm+UcDwKJCxhrLaN3R3dDoyrDNRTgHQQuX5/opJBiUnVNK
    d+vugnxzpMIJQP11yCZkz/KxV8zQ2QPMuZdAoh3znd/vGCJcp0rWphn4pqxA4vDp
    c4hC0Yg9Dha1OoE5CJCqVL+ic4vAyB1urAwBlsd/wH8=
    =B5I+
    -----END PGP PUBLIC KEY BLOCK-----
submitted by fireice_uk to Monero [link] [comments]

This comment on BTC1 replay protection deserves its own thread.

I asked a question in another thread about what people meant by the blacklisting address on the BTC1 fork. User u/PM_ME_FPGA_TRICKS posted an excellent response which I wanted to share with the broader community since it explains not only this issue, but also how the replay protection is supposed to work for BTC1. I didn't realize how laughably bad their solution to replay protection really was until reading this. The comment is at:
https://www.reddit.com/Bitcoin/comments/74oi26/2x_is_already_dead_miners_will_not_mine_a_sha256/do05q0g/
and the text of the comment is below:
The only thing that is confirmed is that there is a blacklisted address in btc1 that cannot receive coins. If you try to send coins to this address, your transaction is invalid and cannot be mined. The code is in the repo in primitives/transaction.cpp
So, if you want to split your BTC into BTC and BTC1, then you would use your BTC client to send some money to the blacklisted address. This transaction would go through on BTC, and you would lose the funds you sent to the address, but your change would come back replay protected, and locked to BTC fork only. The transaction would be blacklisted on BTC1 and your coins would stay where they were. This is more complex than it sounds, because if you only send 1 sat to the blacklist, your wallet won't send all your coins in the transaction, so, not all your funds will be replay protected. This requires manual coin control or multi-recipient sending, which n00bs could easily screw up.
It's also a minor security risk for LN. Imagine you are using LN on BTC1. LN works by the customer and supplier agreeing to put money into an escrow address, and then in 1 final transaction the escrowed funds are divided up with final payment going to the supplier and change going to the customer. If the supplier and customer do not do any business, then the escrow times out, and the customer can recover their funds from the escrow address. However, if the customer sets up the LN payment to send change to the blacklisted address, then the channel's final payment will be blacklisted, the escrow account will time out and the funds can be recovered by the customer. This is, of course, trivial to work around - any LN client on BTC1 just needs to check that the address isn't blacklisted. Hardly rocket science, but still undesirable, and a source for code bloat and potential errors.
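That check could be as simple as the following sketch (illustrative Python; the address and function names are placeholders, not btc1's actual blacklist entry or any LN client's real code):

```python
# Sketch of the "trivial workaround": an LN client on btc1 refuses to build
# a channel whose settlement pays out to a blacklisted (unmineable) address.
BLACKLIST = {"EXAMPLE_BLACKLISTED_ADDR"}  # placeholder, not the real address

def validate_channel_outputs(outputs):
    """outputs: list of (address, amount) pairs for the channel-closing TX."""
    for address, _amount in outputs:
        if address in BLACKLIST:
            raise ValueError(
                f"refusing channel: {address} cannot be mined on this chain"
            )
    return True

# A normal settlement passes; one paying the blacklist would raise.
validate_channel_outputs([("supplier_addr", 90), ("customer_change", 10)])
```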
submitted by andrewbuck40 to Bitcoin [link] [comments]

The Problem with PoW


Miners have always had it rough..
"Frustrated Miners"


The Problem with PoW
(and what is being done to solve it)

Proof of Work (PoW) is one of the most commonly used consensus mechanisms entrusted to secure and validate many of today’s most successful cryptocurrencies, Bitcoin being one. Battle-hardened and having weathered the test of time, Bitcoin has demonstrated the undeniable strength and reliability of the PoW consensus model through sheer market saturation, and of course, its persistency.
In addition to the cost of powerful computing hardware, miners prove that they are benefiting the network by expending energy in the form of electricity, by solving and hashing away complex math problems on their computers, utilizing any suitable tools that they have at their disposal. The mathematics involved in securing proof of work revolve around unique algorithms, each with their own benefits and vulnerabilities, and can require different software/hardware to mine depending on the coin.
Because each block has a unique and entirely random hash, or “puzzle” to solve, the “work” has to be performed for each block individually and the difficulty of the problem can be increased as the speed at which blocks are solved increases.
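As a concrete illustration of the puzzle and difficulty described above, here is a minimal toy proof-of-work loop (illustrative only; real networks encode difficulty as a 256-bit target rather than a count of zero digits):

```python
import hashlib

# Minimal proof-of-work sketch: search for a nonce whose hash meets a
# difficulty target, expressed here as a number of leading zero hex digits.
def mine(header: bytes, difficulty: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Each extra required zero makes the search ~16x harder on average,
# which is how the network scales difficulty with hashrate.
nonce = mine(b"block header", 4)
```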
Hashrates and Hardware Types
While proof of work is an effective means of securing a blockchain, it inherently promotes competition amongst miners seeking higher and higher hashrates due to the rewards earned by the node who wins the right to add the next block. In turn, these higher hash rates benefit the blockchain, providing better security when it’s a result of a well distributed/decentralized network of miners.
When Bitcoin first launched its genesis block, it was mined exclusively by CPUs. Over the years, various programmers and developers have devised newer, faster, and more energy efficient ways to generate higher hashrates; some by perfecting the software end of things, and others, when the incentives are great enough, create expensive specialized hardware such as ASICs (application-specific integrated circuit). With the express purpose of extracting every last bit of hashing power, efficiency being paramount, ASICs are stripped down, bare minimum, hardware representations of a specific coin’s algorithm.
This gives ASICs a massive advantage over CPUs/GPUs in terms of raw hashing power and energy consumption, but with the significant drawbacks of being very expensive to design and manufacture, translating to a high economic barrier for the casual miner. Because they are hardware representations of a single targeted algorithm, if a project decides to fork and change algorithms suddenly, your powerful brand-new ASIC becomes a very expensive paperweight. The high costs and risks involved in developing and manufacturing ASICs make them unfit for mass adoption at this time.
Somewhere on the high end, in the vast hashrate expanse created between GPU and ASIC, sits the FPGA (field programmable gate array). FPGAs are basically ASICs that make some compromises with efficiency in order to have more flexibility, namely they are reprogrammable and often used in the “field” to test an algorithm before implementing it in an ASIC. As a precursor to the ASIC, FPGAs are somewhat similar to GPUs in their flexibility, but require advanced programming skills and, like ASICs, are expensive and still fairly uncommon.
2 Guys 1 ASIC
The Problem with PoW

Miners have always had it rough..
"Frustrated Miners"


The Problem with PoW
(and what is being done to solve it)

Proof of Work (PoW) is one of the most commonly used consensus mechanisms entrusted to secure and validate many of today’s most successful cryptocurrencies, Bitcoin being one. Battle-hardened and having weathered the test of time, Bitcoin has demonstrated the undeniable strength and reliability of the PoW consensus model through sheer market saturation and, of course, its persistence.
In addition to the cost of powerful computing hardware, miners prove that they are benefiting the network by expending energy in the form of electricity, solving and hashing away at complex math problems with whatever suitable tools they have at their disposal. The mathematics involved in securing proof of work revolve around unique algorithms, each with its own benefits and vulnerabilities, and each potentially requiring different software or hardware to mine, depending on the coin.
Because each block has a unique and entirely random hash, or “puzzle” to solve, the “work” has to be performed for each block individually and the difficulty of the problem can be increased as the speed at which blocks are solved increases.
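The per-block "puzzle" can be illustrated with a minimal Python sketch. This is a toy stand-in, not the real Bitcoin scheme: the block data and the leading-zero-bits target here are hypothetical simplifications (real networks hash a structured block header with double SHA-256 against a compactly encoded target):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(data + nonce) falls below a target.
    A smaller target (more required leading zero bits) means a harder puzzle."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each additional difficulty bit roughly doubles the expected work.
nonce = mine(b"example block", difficulty_bits=16)
```

Because the hash output is effectively random, the only way to find a winning nonce is brute force, which is exactly the "work" that miners expend electricity on.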
Hashrates and Hardware Types
While proof of work is an effective means of securing a blockchain, it inherently promotes competition among miners seeking ever higher hashrates, due to the rewards earned by the node that wins the right to add the next block. In turn, these higher hashrates benefit the blockchain, providing better security when they result from a well-distributed, decentralized network of miners.
When Bitcoin first launched its genesis block, it was mined exclusively by CPUs. Over the years, various programmers and developers have devised newer, faster, and more energy-efficient ways to generate higher hashrates: some by perfecting the software end of things, and others, when the incentives are great enough, by creating expensive specialized hardware such as ASICs (application-specific integrated circuits). With the express purpose of extracting every last bit of hashing power, efficiency being paramount, ASICs are stripped-down, bare-minimum hardware representations of a specific coin’s algorithm.
This gives ASICs a massive advantage in terms of raw hashing power and energy consumption over CPUs and GPUs, but with the significant drawback of being very expensive to design and manufacture, translating to a high economic barrier for the casual miner. Because they are hardware representations of a single targeted algorithm, if a project decides to fork and change algorithms suddenly, your powerful brand-new ASIC becomes a very expensive paperweight. The high costs of developing and manufacturing ASICs, and the associated risks, make them unfit for mass adoption at this time.
Somewhere on the high end, in the vast hashrate expanse created between GPU and ASIC, sits the FPGA (field programmable gate array). FPGAs are basically ASICs that make some compromises with efficiency in order to have more flexibility, namely they are reprogrammable and often used in the “field” to test an algorithm before implementing it in an ASIC. As a precursor to the ASIC, FPGAs are somewhat similar to GPUs in their flexibility, but require advanced programming skills and, like ASICs, are expensive and still fairly uncommon.
2 Guys 1 ASIC
One of the issues with proof of work incentivizing the pursuit of higher hashrates lies in how the network calculates block reward coinbase payouts and rewards miners for the work they have submitted. If a coin generates, say, a block a minute, and this is a constant, then what happens if more miners jump on the network and do more work? The network cannot pay out more than one block reward per minute, so a difficulty mechanism is used to maintain balance. The difficulty scales up and down in response to the overall nethash: if many miners join the network, or extremely high hashing devices such as ASICs or FPGAs jump on, the network responds by making the problems harder, effectively giving an edge to hardware that can solve them faster and balancing the network. This not only maintains the block-a-minute reward, it has the added side effect of energy requirements that scale up with network adoption.
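The balancing act described above can be sketched as a simple retarget rule. This is a hypothetical simplification of real schemes (Bitcoin, for instance, retargets every 2016 blocks and clamps each adjustment to a factor of four):

```python
def retarget(difficulty: float, actual_seconds: float, expected_seconds: float,
             max_step: float = 4.0) -> float:
    """Scale difficulty so blocks keep arriving on schedule.
    If blocks arrived faster than expected (more hashrate joined),
    difficulty rises; if slower, it falls. The step is clamped so a
    sudden hashrate swing cannot move difficulty unboundedly."""
    ratio = expected_seconds / actual_seconds
    ratio = max(1.0 / max_step, min(max_step, ratio))
    return difficulty * ratio

# Hashrate doubles -> blocks found twice as fast -> difficulty doubles.
print(retarget(1000.0, actual_seconds=30.0, expected_seconds=60.0))  # 2000.0
```

The clamp is why a huge influx of ASICs raises difficulty over several retarget periods rather than all at once.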
Imagine, for example, a single miner alone on a network with a CPU doing 50 MH/s, collecting all 100 coins that can possibly be paid out in a day. If another miner joins with the same CPU, each miner would receive 50 coins a day instead of 100, since they split the required work evenly, even though the net electrical consumption has doubled along with the work. Electricity costs miners money and is a factor in driving up coin price along with adoption, and since more people are now mining, the coin is less centralized. Now say a large corporation finds it profitable to manufacture an ASIC for this coin, knowing it will make its money back mining it or selling the units to professionals. It joins the network doing 900 MH/s and will pull in 90 coins a day, while the two guys with their CPUs each get 5. Those two guys aren’t very happy, but the corporation is. Not only does this negatively affect the miners, it compromises the security of the entire network by centralizing the coin supply and hashrate, opening the doors to double spends and 51% attacks from potential malicious actors. Uncertainty of motives and questionable validity in a distributed ledger do not mix.
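The split in this example follows directly from each miner's share of the total network hashrate; a quick sketch of the expected-payout math (miner names and the 100-coin daily emission are taken from the example above):

```python
def expected_daily_rewards(hashrates_mhs: dict, daily_coins: float = 100):
    """Each miner's expected payout is proportional to their share
    of the total network hashrate."""
    total = sum(hashrates_mhs.values())
    return {name: daily_coins * rate / total
            for name, rate in hashrates_mhs.items()}

# Two CPUs at 50 MH/s each, plus a 900 MH/s ASIC:
print(expected_daily_rewards({"cpu_1": 50, "cpu_2": 50, "asic": 900}))
# {'cpu_1': 5.0, 'cpu_2': 5.0, 'asic': 90.0}
```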
When technology advances in a field, it is usually applauded and welcomed with open arms, but in the world of crypto things can work quite differently. One of the glaring flaws in the current model, given the advent of specialized hardware, is that it is never-ending. Suppose the two men from the rather extreme example above take out a loan to buy that ASIC they heard about, the one that can earn 90 coins a day. When they join the other ASIC on the network, the difficulty adjusts to keep daily payouts consistent at 100, and each will receive only 33 coins instead of 90, since the reward is now split three ways. And what happens when a better ASIC is released by that corporation? Hopefully, those two guys were able to pay off their loans and sell their old ASICs before they became obsolete.
This system, as it stands now, only perpetuates a never ending hashrate arms race in which the weapons of choice are usually a combination of efficiency, economics, profitability and in some cases control.
Implications of Centralization
This brings us to another big concern with expensive specialized hardware: the risk of centralization. Because they are so expensive and inaccessible to the casual miner, ASICs and FPGAs predominantly remain limited to a select few. Centralization occurs when one small group or a single entity controls the vast majority of hash power and, as a result, coin supply, and is able to exert its influence to manipulate the market or, in some cases, the network itself (usually the case with dishonest nodes or bad actors).
This is entirely antithetical to what cryptocurrency was born of, and since its inception many concerted efforts have been made to avoid centralization at all costs. An entity in control of a centralized coin would have the power to manipulate the price, and having a centralized hashrate would enable them to affect network usability and reliability, and even perform double spends leading to the demise of a coin, among other things.
The world of crypto is a strange new place, with rapidly growing advancements across many fields, economies, and borders, leaving plenty of room for improvement; while it may feel like a never-ending game of catch-up, there are many talented developers and programmers working around the clock to bring us all more sustainable solutions.
The Rise of FPGAs
With the recent implementation of the commonly used coding language C++, and due to their overall flexibility, FPGAs are becoming somewhat more common, especially in larger farms and industrial settings, but they remain primarily out of the hands of most mining enthusiasts and almost unheard of to the average hobby miner. Things appear to be changing though, one example of which I’ll discuss below, and some believe we will soon see a day when mining with a CPU or GPU just won’t cut it any longer and the market will be dominated by FPGAs and specialized ASICs, bringing with them efficiency gains for proof of work while also carelessly leading us all towards the next round of spending.
A perfect real-world example of the effect specialized hardware has had on the crypto community was recently discovered involving a fairly new project called VerusCoin and a fairly new, relatively more economically accessible FPGA. The FPGA is designed to target specific alt-coins whose algorithms do not require RAM overhead. It was discovered that the company had released a new algorithm, kept secret from the public, which could effectively mine Verus at 20x the speed of GPUs, the next fastest hardware type mining on the Verus network.
Unfortunately this was done with a deliberately secret approach, calling the Verus algorithm “Algo1” and encouraging owners of the FPGA to never speak of the algorithm in public channels, admonishing a user when they did let the cat out of the bag. The problem with this business model is that it is parasitic in nature. In an ecosystem where advancements can benefit the entire crypto community, this sort of secret mining approach also does not support the philosophies set forth by the Bitcoin or subsequent open source and decentralization movements.
Although this was not done in the spirit of open source, it does hint at an important step in hardware innovation, where we could see more efficient specialized systems within reach of the casual miner. The FPGA requires a unique set of data called a bitstream in order to recognize each individual coin’s algorithm and mine it. Because it is reprogrammable, with the support of a strong development team creating such bitstreams, the miner doesn’t end up with a brick if an algorithm changes.
All is not lost thanks to.. um.. Technology?
Shortly after discovering FPGAs on the network, the Verus developers quickly designed, tested, and implemented a new, much more complex and improved algorithm via a fork that enabled Verus to transition smoothly from VerusHash 1.0 to VerusHash 2.0 at block 310,000. Since the fork, VerusHash 2.0 has done exactly what it was designed for: equalizing hardware performance relative to the device being used while enabling CPUs (the most widely available “ASICs”) to mine side by side with GPUs at a profit, and it appears this will also apply to other specialized hardware. This is something no other project has been able to do until now. Rather than pursue the folly of so many projects before it, attempting to be “ASIC proof”, Verus effectively achieved and presents to the world an entirely new model of “hardware homogeny”. As the late, great Bruce Lee once said, “Don’t get set into one form, adapt it and build your own, and let it grow, be like water.”
In the design of VerusHash 2.0, Verus has shown it doesn’t resist progress like so many other new algorithms try to do, it embraces change and adapts to it in the way that water becomes whatever vessel it inhabits. This new approach- an industry first- could very well become an industry standard and in doing so, would usher in a new age for proof of work based coins. VerusHash 2.0 has the potential to correct the single largest design flaw in the proof of work consensus mechanism- the ever expanding monetary and energy requirements that have plagued PoW based projects since the inception of the consensus mechanism. Verus also solves another major issue of coin and net hash centralization by enabling legitimate CPU mining, offering greater coin and hashrate distribution.
Digging a bit deeper, it turns out the Verus development team are no rookies. The lead developer, Michael F Toutonghi, has spent decades in the field and is a former Vice President and Technical Fellow at Microsoft, recognized founder and architect of Microsoft's .Net platform, ex-Technical Fellow of Microsoft's advertising platform, ex-CTO of Parallels Corporation, and an experienced distributed computing and machine learning architect. The project he helped create employs a diverse array of technologies and security features to form one of the most advanced and secure cryptocurrencies to date. A brief description of what makes VerusCoin special, quoted from a community member:
"Verus has a unique and new consensus algorithm called Proof of Power which is a 50% PoW/50% PoS algorithm that solves theoretical weaknesses in other PoS systems (Nothing at Stake problem for example) and is provably immune to 51% hash attacks. With this, Verus uses the new hash algorithm, VerusHash 2.0. VerusHash 2.0 is designed to better equalize mining across all hardware platforms, while favoring the latest CPUs over older types, which is also one defense against the centralizing potential of botnets. Unlike past efforts to equalize hardware hash-rates across different hardware types, VerusHash 2.0 explicitly enables CPUs to gain even more power relative to GPUs and FPGAs, enabling the most decentralizing hardware, CPUs (due to their virtually complete market penetration), to stay relevant as miners for the indefinite future. As for anonymity, Verus is not a "forced private", allowing for both transparent and shielded (private) transactions...and private messages as well"
If other projects can learn from this and adopt a similar approach or continue to innovate with new ideas, it could mean an end to all the doom and gloom predictions that CPU and GPU mining are dead, offering a much needed reprieve and an alternative to miners who have been faced with the difficult decision of either pulling the plug and shutting down shop or breaking down their rigs to sell off parts and buy new, more expensive hardware…and in so doing present an overall unprecedented level of decentralization not yet seen in cryptocurrency.
Technological advancements led us to the world of secure digital currencies and the progress being made with hardware efficiencies is indisputably beneficial to us all. ASICs and FPGAs aren’t inherently bad, and there are ways in which they could be made more affordable and available for mass distribution. More than anything, it is important that we work together as communities to find solutions that can benefit us all for the long term.
In an ever-changing world where it may be easy to lose sight of the real accomplishments that brought us to this point, one thing is certain: cryptocurrency is here to stay, and the projects doing something to solve the current problems in the proof of work consensus mechanism will be the ones that lead us toward our collective vision of a better world, not just for the world of crypto but for each and every one of us.
submitted by Godballz to gpumining [link] [comments]

SUQA coin currency supporting X22i algorithm

https://preview.redd.it/6iunr5ocgn021.png?width=800&format=png&auto=webp&s=9c2a77616015f026d953076cbb29e79d6abd9b61
I want to tell you about a currency built on the X22i algorithm: SUQA coin. The SUQA team is confident that the algorithm will remain resistant to mining on quantum computers, primarily because of its memory requirements, which such devices cannot easily satisfy. Beyond the new mining algorithm, SUQA also features a high transaction rate of more than 530 transactions per second at low transaction cost, roughly 75 times that of Bitcoin.
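The "75 times Bitcoin" figure roughly checks out if one assumes Bitcoin's commonly cited ceiling of about 7 tx/s (an assumed reference figure; Bitcoin's real-world throughput varies):

```python
suqa_tps = 533     # SUQA's stated maximum throughput
bitcoin_tps = 7    # commonly cited approximate Bitcoin ceiling (assumption)

multiple = suqa_tps / bitcoin_tps
print(round(multiple))  # 76, in line with the claimed "roughly 75 times"
```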


https://preview.redd.it/lpy9139hgn021.png?width=599&format=png&auto=webp&s=f4ceb148f367791214a18de10f6d5d913782b66e
And as for the ecosystem: simply by reading this article or chatting with a friend about SUQA, you have already begun to be part of it. An ecosystem, as it pertains to blockchain technology, is everything that works together to leverage the technology toward some meaningful purpose, whether that be providing a service, a product, or some other utility.
At its core, the SUQA blockchain ecosystem focuses on supporting blockchain startups, cryptolancers, and charities. This means SUQA can be very multifaceted and integral by the nature of this core setup. Startups can lead to fintech advancements, which can attract more blockchain cryptolancers, which can spur additional synergies for the betterment of the community and even its chosen charitable causes. The illustration below describes the basics of the SUQA blockchain ecosystem.
SUQA Specifications:
Coin name: SUQA
Ticker : SUQA
Algorithm: X22i (Dedicated FPGA/ASIC Resistance)
Coin Type: POW
Max. supply: 1,078,740,313 + 10% Development Budget
Block Time: 2 minutes
Max Block Size: 16 MB
Max tx/s: 533 tx/s (fastest known PoW tx/s)
Difficulty Retarget Algorithm: DarkGravityV3
RPC port: 20971
P2P port: 20970
Ico: No
Pre-Mine: No
Masternode: No
Pre-Sale: No
Development-Budget: 10%
Genesis: 26 September, 2018
Block rewards:
Blocks 1 to 22,000: 10,000 = 220,000,000
Blocks 22,001 to 50,000: 5,000 = 139,995,000
Blocks 50,001 to 100,000: 2,500 = 124,997,500
Blocks 100,001 to 200,000: 1,250 = 124,998,750
Blocks 200,001 to 400,000: 625 = 124,999,375
Blocks 400,001 to 1,500,000: 312.5 = 343,749,688
TOTAL SUPPLY: 1,078,740,313, plus 10% founders fee, mined over 5.78 years.
MAX TOTAL : 1,186,614,344 SUQA
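The stated totals can be cross-checked by summing the per-range emission subtotals as published in the spec above:

```python
# Per-range emission subtotals as given in the block reward schedule.
subtotals = [220_000_000, 139_995_000, 124_997_500,
             124_998_750, 124_999_375, 343_749_688]

total_supply = sum(subtotals)
max_total = total_supply + total_supply // 10  # plus the 10% development budget

print(total_supply)  # 1078740313
print(max_total)     # 1186614344
```

Both figures match the TOTAL SUPPLY and MAX TOTAL values listed in the specification.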
Another feature of SUQA is that it gives investors the opportunity to earn income from deposits: locked funds earn 5 percent interest, and for the first three months the rate is 25 percent (the team attributes this capability to the expandable memory design of X22i). An important point of the entire SUQA ecosystem is confidentiality and transparency in all internal operations. Security is provided through the unique wallet address, which makes the SUQA coin a trustworthy digital asset. Transparency is guaranteed by the openness of the source code. SUQA has started to attract more and more attention from people interested in electronic money.
Website: https://suqa.org
Whitepaper: https://suqa.org/file/2018/10/suqa-whitepaper.pdf
BitcoinTalk: https://bitcointalk.org/index.php?topic=5038269.0
GitHub: https://github.com/SUQAORG
Twitter: https://twitter.com/SUQAfoundation
Facebook: https://facebook.com/SUQAFoundation
Discord: https://discord.gg/qrtU7Y9
Author: uk baxoi
profile: https://bitcointalk.org/index.php?action=profile;u=2364181
submitted by mahdi32 to stealthcrypto [link] [comments]


Several open-source FPGA Bitcoin mining projects are documented here. The “Half-Fast” Bitcoin Miner is an open-source FPGA mining project covering the Bitcoin protocol, a high-level block diagram, hardware and software implementation, testing methodology, and performance comparisons. BTCMiner is mining software for ZTEX USB-FPGA modules; since these boards contain a USB interface, no additional hardware (such as a JTAG programmer) is required, and low-cost FPGA clusters can be built using standard USB hubs (a Spartan 6 USB-FPGA Module 1.15b with XC6SLX75 typically reaches 90 MH/s). The first open-source FPGA Bitcoin miner was released on May 20, 2011; programming and running the FPGAminer code requires Quartus II for Altera devices. Ports for the Ztex 1.15y and Cairnsmore CM1 quad boards are available in an experimental folder, both achieving around 60 kHash/s total across all four FPGA devices (identical to the ICARUS-LX150 code) with a customised version of cgminer 3.1.1. Finally, there is a completely open-source implementation of a Bitcoin miner for Altera and Xilinx FPGAs, which hopes to promote the free and open development of FPGA-based mining solutions; a binary release is currently available for the Terasic DE2-115.



