Bitwise launches spot XRP ETF; LeanHash provides holders with a stable channel to earn 7,000 XRP per day

XRP’s new spot ETF is drawing institutional attention, while retail investors are turning to LeanHash for daily XRP earnings during market volatility. With Bitwise Asset Management announcing the official listing of its spot XRP ETF on the NYSE, market focus has once again returned to this long-active payments-focused digital asset. Against the backdrop of continued inflows into altcoin funds and institutional capital, XRP is becoming a “convergence point” between traditional capital markets and the blockchain world. The ETF’s debut, with its 0.34% management fee, a fee waiver on the first $500 million in assets, and XRP’s high on-chain activity, has attracted institutional investors, pushing XRP to a higher level of compliance and professionalism.

However, unlike institutions, which choose ETFs as long-term allocation tools, ordinary investors are more concerned with how to obtain tangible cash flow during periods of high price volatility. ETFs are stable but do not generate daily returns, and this structural difference in demand is quietly amplifying. A new trend is therefore emerging rapidly: more and more XRP holders are flocking to LeanHash, a leading global platform for mining-based digital assets, to earn daily passive income by participating in its mining and distribution mechanisms. LeanHash drives returns with real computing power, freeing holders from dependence on market fluctuations and maintaining cash flow even during periods of low prices or sideways movement. This makes LeanHash the most popular way for retail investors to generate XRP revenue outside of ETFs, and it fills the gap left by the market’s polarized strategies: institutions use ETFs to allocate to long-term value, while retail investors use LeanHash to obtain daily returns, jointly driving XRP’s ecosystem expansion and value reshaping in 2025.

How to get started with LeanHash

1. Visit the LeanHash website and create an account to receive a $15 bonus.
2. Choose a suitable contract term based on individual budget and expected returns.
3. Start mining. Earnings are calculated daily.

LeanHash Computing Power Contract Examples:

• Entry-level Contract: Investment: $100 | Term: 2 days | Principal + Return: $107
• Basic Computing Power Contract: Investment: $1,200 | Term: 13 days | Principal + Return: $1,412.16
• Intermediate Computing Power Contract: Investment: $5,300 | Term: 33 days | Principal + Return: $8,045.90
• High-Performance Computing Contract: Investment: $12,000 | Term: 42 days | Principal + Return: $20,870.40
• High-Performance Computing Contract: Investment: $37,000 | Term: 47 days | Principal + Return: $70,736.60
• Supercomputer Contract: Investment: $120,000 | Term: 51 days | Principal + Return: $257,700

Example: Invest $12,000 to purchase a 42-day high-performance computing contract with a daily return of 1.76%. Upon successful purchase, the user receives a stable daily return of $12,000 x 1.76% = $211.20. After 42 days, the principal plus returns come to $12,000 + ($211.20 x 42 days) = $12,000 + $8,870.40 = $20,870.40. The platform offers a variety of stable, high-yield contracts; please visit the LeanHash website for details.
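
For readers who want to double-check the figures, here is a minimal sketch of the flat-rate arithmetic the example above uses (the function name is ours, and nothing here is investment guidance):

```python
# Flat-rate contract arithmetic as described in the example above:
# a fixed daily return accrues on the principal for the full term.
def contract_payout(principal: float, daily_rate: float, days: int) -> float:
    """Principal plus (principal * daily_rate) credited once per day."""
    return principal + principal * daily_rate * days

# The article's example: $12,000 at 1.76% per day for 42 days.
print(f"${contract_payout(12_000, 0.0176, 42):,.2f}")  # -> $20,870.40
```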

Why choose LeanHash?

1. Global deployment: LeanHash operates data centers in over 70 regions, which have been operating securely and reliably for over eight years.
2. Green energy: LeanHash uses 100% renewable energy, setting a new benchmark for environmentally friendly mining.
3. Bank-grade security: SSL encryption and cold wallet storage ensure comprehensive protection of assets.
4. Compliance guarantee: Headquartered in the UK, with relevant registration and compliance certifications.
5. Stable returns: Fixed contracts, transparent fees, and low entry barriers.
6. Lightning-fast service: 24×7 customer support with a response time of within 3 minutes.
7. Multi-currency compatibility: Supports deposits and withdrawals of major cryptocurrencies such as BTC, ETH, XRP, DOGE, LTC, USDT, SOL, and BNB.

About LeanHash

LeanHash is a leading intelligent cloud computing platform. With HashFi at its core, it provides stable, sustainable, and low-risk daily yield solutions for holders of mainstream assets such as BTC, ETH, and XRP. Through global computing nodes and advanced AI scheduling algorithms, LeanHash has established a price-independent daily passive income stream for over 3 million users.

Conclusion

The launch of the Bitwise XRP ETF is undoubtedly a significant milestone for the market, symbolizing the traditional financial world’s recognition of XRP’s long-term value. However, what truly allows XRP to maintain its vitality during the current volatile period is on-chain infrastructure like LeanHash. For institutions, the ETF makes XRP an allocatable asset; for retail investors, LeanHash makes XRP a sustainable yield asset. Together, they are accelerating XRP’s evolution from a “speculative asset” to a “functional asset + cash flow asset.”
https://crypto.news/bitwise-launches-spot-xrp-etf-leanhash-provides-holders-with-a-stable-channel-to-earn-7000-xrp-per-day/

MultiVM Support Now Live On a Supra Testnet, Expanding To EVM Compatibility

[PRESS RELEASE: Zug, Switzerland, November 19th, 2025] Supra, the vertically integrated Layer 1 powering MultiVM smart contract execution with native oracles, dVRF, automation, and cross-chain communication, announced the opening of applications for its MultiVM testnet during today’s keynote at Devconnect Buenos Aires, held as part of Multichain Day. The announcement was delivered by Jon Jones, Co-Founder and Chief Business Officer of Supra, during a session exploring the future of interoperability and next-generation Web3 infrastructure.

This launch marks a pivotal moment in the evolution of dApp development, as Supra aims to offer EVM developers, in addition to MoveVM developers, a powerful new foundation to build, test, and deploy high-performance applications that can scale across ecosystems without compromising on security, composability, or liquidity. SolanaVM support is underway and is expected to be deployed early next year, making Supra the first L1 to support multiple virtual machines and execution environments from a variety of ecosystems.

“We built MultiVM because we believe developers should never have to choose between innovation and access,” said Jones. “Supra aims to give teams the ability to write smart contracts in the languages of multiple virtual machines and deploy them into a fully parallelized environment, while also accessing native automation and cross-chain services, all within one unified infrastructure.”

The Supra MultiVM architecture empowers dApp developers to write smart contracts in either Solidity or Move, with SolanaVM support underway, tapping into Supra’s native services such as its 600+ built-in oracle price feeds, cross-chain communications, and automation capabilities. This eliminates long-standing constraints that have forced projects to remain locked into Ethereum-compatible environments, forgoing the ability to interoperate with applications from other growing ecosystems. By unifying these environments with shared blockspace and a converged execution model, Supra’s MultiVM is poised to redefine what is possible in the multi-chain era. Supra aims to become a decentralized global supercomputer, and its thesis is that any such infrastructure should be able to run programs written in a variety of programming languages.

A core component of the MultiVM design is SupraEVM, the network’s native EVM-compatible implementation. Built from the ground up with performance in mind, SupraEVM incorporates SupraBTM, a hybrid parallel execution engine capable of deterministic, conflict-aware scheduling. This enables significantly higher throughput and lower latency than other EVM chains in the industry, without requiring developers to modify existing Solidity contracts.

SupraBTM has recently been benchmarked as the fastest parallel EVM executor publicly tested to date. Independent analysis confirms that SupraBTM outperforms emerging EVM projects such as Monad in high-conflict DeFi workloads, while maintaining verifiable safety and deterministic finality. In a bold move to further validate this claim, Supra’s Co-Founder and CEO, Joshua Tobkin, launched a $1 million public bounty, offering a reward to any team capable of beating SupraBTM’s performance by 15% or more in open, reproducible benchmarks.

“Too often, developers are forced to compromise between ecosystems. EVM devs feel the constraints of legacy performance, while Move developers remain isolated from the liquidity-rich Ethereum economy,” said Tobkin.
“With MultiVM, Supra is collapsing the walls between ecosystems. Our vision is to provide a scalable foundation that allows applications to transcend technical silos and reach users wherever they are. With SupraBTM, we’re also proving that high performance and verifiability can coexist without tradeoffs.”

MultiVM on a Supra Testnet: Now Open for Applications

In a significant step to attract the best EVM talent to the Supra ecosystem, Supra has today opened applications for a considerable grant program. The program invites EVM developers seeking early access to the MultiVM Testnet environment, purpose-built to simulate mainnet conditions, to submit their best ideas and applications. Selected projects will have access to nearly all of Supra’s complete developer stack, including:

• Full MoveVM and EVM compatibility (no code changes required for Solidity contracts)
• Vertically integrated services such as native oracles, deterministic verifiable randomness (dVRF), and event-driven automation, all live or coming soon to the SupraEVM
• Unified RPC infrastructure supporting both Move and EVM environments
• Tooling compatibility with Hardhat, Foundry, and Supra’s native CLI/IDE
• Upcoming support for VM-to-VM communication and composable automation across environments

The EVM grant program has been allocated $250,000 to attract the best talent on offer and is designed to support innovative EVM-native projects deploying on Supra’s infrastructure. The program will fund up to 10 development teams, with each team eligible for up to $25,000 in milestone-based grants. Each selected project will work closely with the Supra developer relations team to define technical milestones, deploy contracts on MultiVM infrastructure, and prepare for eventual mainnet integration in early 2026.

This initiative follows a wave of recent infrastructure developments from Supra, including the launch of the DevWire Hub, a new resource portal offering tutorials, CLI tools, RPC documentation, and weekly updates for developers building in Supra’s ecosystem. Supra’s MultiVM strategy also includes long-term support for additional VM integrations beyond EVM and Move, with SolanaVM currently in development. This approach reflects a broader vision of universal Web3 composability, where developers can launch applications once, then reach users on any network without friction or fragmentation.

Accessing Supra’s MultiVM Infrastructure

Developers building high-performance dApps in Solidity are now invited to apply for Supra’s MultiVM Testnet and associated grant program. The program offers early access to Supra’s next-generation infrastructure, engineering support, and funding for projects that demonstrate strong technical merit and alignment with Supra’s multi-chain vision.

About Supra

Supra is the first chain built for Automatic DeFi (AutoFi), a novel self-operating automated financial system that also serves as the perfect framework for crypto AI Agents, built upon its vertically integrated Layer-1 blockchain with built-in high-speed smart contracts, native price oracles, system-level automation, and bridgeless cross-chain messaging. Supra’s vertical stack unlocks all-new AutoFi primitives that can generate fair recurring protocol revenue and redistribute it across the ecosystem, reducing reliance on inflationary block rewards over time. This stack also equips on-chain AI Agents with all the tools they need to run a wide variety of powerful DeFi workflows for users automatically, autonomously, and securely.
https://cryptopotato.com/multivm-support-now-live-on-a-supra-testnet-expanding-to-evm-compatibility/

Big data, big challenge – how life sciences turn information overload into insight

Big data refers to extremely large and complex data sets, so massive and intricate that they cannot be managed or analyzed by traditional data processing tools. In life sciences, these vast datasets are generated every day from experiments, clinical records, and screening programs. Sequencing just one human genome, for example, can produce more than 200 gigabytes of raw data. This scale of information is pivotal for discovery, but only if it can be organized and made usable. While data is the bedrock of the life sciences industry, big data introduces practical challenges, not just in storage and security, but in turning information into actionable insight.

The benefits of big data in life sciences

Identifying trends early: Big data allows scientists to detect patterns that help predict disease outbreaks, track disease progression, and guide preventative measures. This can ultimately save lives.

Designing targeted medicine: By combining genomic, clinical, and lifestyle data, researchers can design treatment plans tailored to individual patients. This improves outcomes and accelerates precision medicine.

Making better decisions: Big data analytics empowers researchers, clinicians, and policymakers to make more informed, evidence-based decisions about care and resource allocation.

The complexities of big data in life sciences

While big data certainly offers great value for life sciences, it’s worth briefly considering some of the challenges that make managing scientific data uniquely complex. These can be grouped into two broad categories: infrastructure and the data itself.

Infrastructure complexity

The scale and speed of data generation in biopharma R&D demand flexible, high-performance infrastructure. Traditional on-premise systems struggle to keep up with the volume and velocity of scientific data, especially as instruments, sensors, and models generate continuous streams of information. Cloud-based software-as-a-service (SaaS) platforms, however, are helping to overcome this barrier by providing elastic scalability, built-in security, and simplified data access. This allows scientists to focus on research rather than infrastructure management.

Data diversity and integration

In life science research, data comes in many forms: structured clinical trial tables, semi-structured instrument outputs, and unstructured lab notes or images. This “variety” makes it difficult to consolidate and analyze results across experiments and teams. Effective big-data management therefore relies on platforms that can unify these sources, maintain scientific context, and support collaboration across discovery, development, and clinical environments.

Responsible data management in biopharma R&D

Managing big data responsibly presents significant challenges for life sciences organizations, from protecting sensitive information to ensuring that data remains usable and connected across the research landscape. The sheer volume of data being generated requires ever-larger, more efficient storage and processing solutions, whilst also creating difficulties for researchers, who must sift through overwhelming amounts of information to find what is relevant and actionable.

At the same time, the need to protect this data has never been greater. As personal and genomic information becomes more widely collected, organizations must ensure that it is handled securely and in compliance with data protection regulations. Any lapse in governance risks not only regulatory penalties but also the erosion of public trust.
The rise of AI analysis tools adds another layer of complexity. While AI can act as a powerful collaborator in managing and interpreting big data, it requires careful oversight, particularly when handling sensitive health information. Systems must be transparent, accountable, and rigorously validated to prevent errors or data breaches. A recent McKinsey report notes that AI’s promise lies in augmenting human capability, not replacing it, but that collaboration must be built on trust.

There is also the potential for bias in AI-driven systems. According to Harvard Online, “Big data algorithms may exhibit bias and discrimination based on factors such as race, gender, and socioeconomic status. Biased algorithms can perpetuate existing inequalities and undermine trust in automated decision-making systems.” Scientific advances are meant to benefit everyone. Addressing these ethical and technical concerns is essential, not only to uphold fairness and accuracy but also to ensure that discoveries are based on reliable, representative data.

But in life sciences, protecting data is only half the battle. To drive discovery, data must also move freely and retain meaning across the research and clinical continuum. The bottleneck in healthcare innovation today is no longer discovery; it is integration. The next evolution in scientific informatics lies in creating a digital thread that connects data across systems and stages, so that every insight, sample, and result remains part of a continuous picture. Laboratory Information Management Systems (LIMS) and other data platforms are most powerful when they not only collect data but allow scientists to make sense of it. The goal is not more data but connected data that fuels better science.

Strategies for big data management

The scale and speed of data generation in life sciences demand flexible, scalable, and centralized systems. Cloud-based platforms are increasingly preferred for their ability to consolidate data across instruments, systems, and locations. Combined with AI and machine learning, they enable researchers to analyze large datasets, identify patterns, and predict emerging trends. Yet, despite this potential, the growth of big data has outpaced many organizations’ ability to manage it effectively. The challenge now is not collecting more data; it is connecting it, contextualizing it, and turning it into valuable insight.
https://www.techradar.com/pro/big-data-big-challenge-how-life-sciences-turn-information-overload-into-insight

Building a Crypto Portfolio for 2026: Where IPO Genie Fits In

Why Allocation Matters More Than Individual Token Picks

In serious portfolio construction, one principle is non-negotiable: allocation is more important than selection. In crypto, where volatility is extreme and narratives evolve quickly, this truth is even more pronounced. Two investors can hold similar assets yet experience radically different outcomes simply because one structured their exposure intelligently, while the other chased momentum.

As the market evolves toward 2026—with AI-enhanced research, tokenized private markets, audited presales, and institutional-grade infrastructure—investors seeking the best crypto allocation must think in terms of risk layers, not isolated bets.

Core Requirements of the Best Crypto Allocation in 2026

A robust allocation today must:

  • Distribute risk across blue-chip, growth, and emerging assets
  • Incorporate AI-driven discovery tools
  • Include exposure to tokenized private and pre-IPO markets
  • Allow limited, controlled participation in frontier innovation
  • Be structured enough to survive drawdowns, but flexible enough to capture upside

At the same time, sophisticated investors increasingly use tracking methods like UTM-tagged links to understand how interest, research, and engagement flow over time. For example, visiting the official IPO Genie portal allows performance and engagement to be measured in a structured way.

The 40/30/20/10 Allocation Blueprint

A professional, risk-aware model for the best crypto allocation in 2026 can be summarized as:

  • 40% Blue-Chip Foundational Assets
  • 30% Mid-Cap Growth Assets
  • 20% Emerging High-Conviction Assets
  • 10% Frontier Innovation Assets

This model is designed to balance stability, scalability, and asymmetric upside.

40% Blue-Chip Layer: Structural Stability

The blue-chip layer underpins the entire portfolio. It typically includes:

  • Bitcoin
  • Ethereum
  • Leading layer-1 networks with strong liquidity and adoption
  • Institutional-grade infrastructure assets

These assets provide:

  • Deep liquidity
  • Long-term demand drivers
  • Lower relative downside during market stress

Allocating ~40% of capital here establishes a resilient core that can absorb volatility from higher-risk segments.

30% Mid-Cap Growth Layer: Scalable Expansion

The mid-cap growth segment targets assets with:

  • Proven product-market fit
  • Significant user or developer traction
  • Room to grow without being purely speculative

This category may include:

  • AI-integrated networks
  • Layer-2 scaling solutions
  • High-performance smart contract chains
  • Oracle and data-layer protocols

Historically, this layer outperforms blue chips in bull phases while remaining more defensible than early-stage speculation.

20% Emerging High-Conviction Layer: Intelligent Asymmetry

The emerging high-conviction layer is where investors target disproportionate upside based on strong fundamentals, not hype. This is precisely where a project like IPO Genie fits.

Why IPO Genie Fits This Allocation Band

AI-Powered Deal Discovery
IPO Genie uses AI to surface, filter, and rank early-stage opportunities, providing a more systematic approach to what is often a chaotic presale landscape.

Tokenized Private Market Access
As reported by Blockonomi’s institutional coverage, institutional investors are already turning to IPO Genie for tokenized exposure to private and pre-IPO deals—an area that has historically been closed to most market participants.

Behavior-Based Staking and Incentives
The platform’s behavior-based staking model is designed to encourage long-term, constructive holding patterns rather than purely speculative churn.

Pre-IPO-Backed Insurance Structures
Pre-IPO exposure tied to insurance mechanics introduces an additional layer of structural protection unusual in the presale niche.

Not Just Another Presale Bubble

A FinanceFeeds analysis of the presale landscape specifically distinguishes IPO Genie from typical “presale bubble” projects, highlighting its underlying real-economy thesis and AI-first architecture.

Allocating around 20% of the portfolio to this class—anchored by high-conviction AI and real-world asset (RWA) projects—provides intelligent exposure to outsized upside while still respecting risk.

For deeper due diligence, investors can revisit the IPO Genie UTM-tracked platform to analyze evolving information and offerings over time.

10% Frontier Innovation Layer: Controlled Speculation

The frontier allocation is reserved for:

  • Experimental layer-1 and layer-2 ecosystems
  • Novel AI agents and autonomous protocols
  • Early-phase presales with limited history
  • Airdrop-driven or narrative-driven opportunities

Here, the objective is optionality, not certainty. By capping this at ~10%, the portfolio can participate in breakthrough innovation without allowing speculative bets to dominate overall risk.

Visual Allocation Snapshot

Traditional vs. AI-Enhanced Crypto Allocation

*A visual representation would be placed here to compare traditional allocation to the AI-enhanced 40/30/20/10 model*

Implementing the Best Crypto Allocation: A Professional Process

  1. Define Risk Parameters: Begin by clarifying investment horizon, liquidity requirements, and acceptable drawdown levels to ensure decisions align with your overall mandate.
  2. Apply the 40/30/20/10 Allocation Model: Distribute capital methodically across blue-chip, growth, emerging, and frontier layers based on conviction and risk appetite.
  3. Underwrite Emerging Exposure with AI & Data: For the 20% emerging sleeve, leverage AI-driven platforms like the official IPO Genie platform to evaluate AI-ranked deal flow and tokenized private-market opportunities.
  4. Rebalance Periodically: Rebalance quarterly or semi-annually to prevent oversized winners from distorting portfolio construction and ensure laggards do not dominate psychology. (A minimal rebalancing sketch follows this list.)
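
As a minimal illustration of steps 2 and 4, the sketch below computes the buys and sells needed to pull a drifted portfolio back to the 40/30/20/10 targets; the sleeve names and dollar amounts are hypothetical:

```python
# Target weights from the article's 40/30/20/10 model.
TARGETS = {"blue_chip": 0.40, "mid_cap": 0.30, "emerging": 0.20, "frontier": 0.10}

def rebalance_orders(holdings: dict[str, float]) -> dict[str, float]:
    """Dollar amount to buy (+) or sell (-) per sleeve to restore targets."""
    total = sum(holdings.values())
    return {sleeve: TARGETS[sleeve] * total - value
            for sleeve, value in holdings.items()}

# Hypothetical drifted portfolio: the emerging sleeve has run well past 20%.
holdings = {"blue_chip": 40_000, "mid_cap": 28_000,
            "emerging": 45_000, "frontier": 12_000}
for sleeve, delta in rebalance_orders(holdings).items():
    print(f"{sleeve:>9}: {'buy' if delta >= 0 else 'sell'} ${abs(delta):,.0f}")
# emerging: sell $20,000; the other three sleeves absorb the proceeds.
```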

Common Allocation Errors to Avoid

  • Over-concentration in a single theme or chain
  • Treating presales as lottery tickets instead of structured exposures
  • Ignoring AI-assisted diligence in an increasingly complex market
  • Letting narrative hype override pre-defined allocation rules
  • Failing to rebalance in response to significant market moves

Conclusion

The best crypto allocation for 2026 isn’t about chasing the next chart-topping token. It’s about architecting a risk-aware, structurally sound portfolio that can absorb the market’s wild swings while capturing upside from AI, tokenized private markets, and frontier innovation.

By adopting a 40/30/20/10 allocation model, integrating AI-enhanced platforms such as IPO Genie, and leveraging tools like UTM tracking for data-backed refinement, investors stop reacting to hype and start managing their portfolios with purpose: a shift from guessing to guiding, from hoping for luck to relying on a repeatable strategy.

FAQs

1. How often should a professionally structured crypto portfolio be rebalanced?

A disciplined crypto portfolio should typically be rebalanced on a quarterly or semi-annual basis, depending on volatility and mandate structure. This ensures that outsized performers don’t inflate overall risk exposure and that underperformers don’t disproportionately influence allocation decisions. For institutional investors, rebalancing is a mandatory control mechanism for maintaining adherence to predefined allocation bands.

2. Where does IPO Genie belong in a professionally managed allocation structure?

IPO Genie fits within the Emerging High-Conviction (20%) allocation sleeve, which is dedicated to early-stage, AI-assisted, or tokenized private-market opportunities. Its AI-ranked deal discovery, behavior-based staking model, and tokenized pre-IPO framework make it suitable for investors seeking structured exposure to asymmetric upside opportunities without compromising portfolio architecture.

3. How does UTM tracking enhance crypto research and allocation decisions?

UTM tracking allows investors to measure engagement, research flow, and thematic concentration, helping determine which assets or sectors repeatedly attract interest. This meta-analysis can guide deeper due diligence, highlight overlooked opportunities, and support data-driven allocation adjustments—especially when using platforms like the official IPO Genie portal for emerging asset evaluation.
https://bitcoinethereumnews.com/crypto/building-a-crypto-portfolio-for-2026-where-ipo-genie-fits-in/

Coinbase Ventures-Backed Supra Offers $1M Bounty to Beat Its Parallel EVM Execution Engine

**Supra CEO Joshua Tobkin Offers $1 Million Personal Bounty for Faster EVM-Parallel Execution Engine**

Joshua Tobkin, CEO and Co-Founder of Supra, has pledged up to $1 million worth of his own UPRA tokens as a personal bounty. This reward is open to any developer or research team that can demonstrate a faster, verifiably correct EVM-parallel execution engine than SupraBTM — the core execution engine behind SupraEVM.

Dubbed the **SupraEVM Speed Challenge**, this personal bounty complements an ongoing $40,000 USDC performance-based reward provided by the Supra foundation. So far, no team has surpassed the performance benchmarks set by SupraBTM. It remains the leading engine in public tests against all known EVM-parallel solutions — including Monad, one of the most optimized projects in the high-performance EVM space.

> “I am betting $1 million of my own tokens that no one can beat Supra,” said Joshua Tobkin. “Supra is built on transparency. We claim to be the fastest, so we are aiming to prove it in public. And if someone can demonstrate a superior execution engine under clear conditions, I will honor that outcome directly.”

### Addressing the Core Bottleneck in Blockchain Scalability

While consensus protocols, data availability layers, and oracle infrastructure have improved significantly in recent years, transaction execution remains a fundamental bottleneck limiting the scalability of decentralized applications.

Safe and deterministic parallel execution within the Ethereum Virtual Machine (EVM) is especially challenging yet crucial. It enables low-latency DeFi, real-time gaming, and AI-driven autonomous agents to function effectively.

SupraEVM, powered by **SupraBTM (Block Transactional Memory)**, tackles this challenge head-on. Its conflict-specification aware architecture reduces overhead, anticipates transaction collisions, and schedules execution based on statically analyzed dependency graphs — enabling faster and more efficient transaction processing.
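
Supra has not published SupraBTM’s internals here, but the general technique the article names, deterministic conflict-aware scheduling over statically analyzed dependencies, can be illustrated with a small sketch. Transactions declare read/write sets; the scheduler assigns each one to the earliest “wave” that respects its conflicts, so waves execute in parallel while block order keeps results deterministic. All names and structure below are ours, not Supra’s:

```python
# Generic dependency-wave scheduler, illustrative only (not SupraBTM's code).

def conflicts(a: dict, b: dict) -> bool:
    """Two transactions conflict if either writes state the other touches."""
    return bool(a["writes"] & (b["reads"] | b["writes"])
                or b["writes"] & (a["reads"] | a["writes"]))

def schedule(txs: list[dict]) -> list[list[int]]:
    """Assign each tx to one wave after the latest earlier conflicting tx.
    Waves run sequentially; txs within a wave are pairwise conflict-free and
    can run in parallel. Assignment depends only on block order, so the
    schedule (and hence the final state) is deterministic."""
    waves: list[list[int]] = []
    wave_of: list[int] = []  # wave index of each already-scheduled tx
    for i, tx in enumerate(txs):
        w = 1 + max((wave_of[j] for j in range(i) if conflicts(tx, txs[j])),
                    default=-1)
        if w == len(waves):
            waves.append([])
        waves[w].append(tx["id"])
        wave_of.append(w)
    return waves

txs = [
    {"id": 1, "reads": {"poolA"}, "writes": {"poolB"}},
    {"id": 2, "reads": {"poolC"}, "writes": {"poolD"}},  # independent of tx 1
    {"id": 3, "reads": {"poolB"}, "writes": {"poolA"}},  # conflicts with tx 1
]
print(schedule(txs))  # [[1, 2], [3]] -- tx 3 waits for tx 1's write to poolB
```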

### Benchmark Results: SupraBTM Outperforms Monad

SupraBTM was benchmarked on 10,000 Ethereum mainnet blocks and tested directly against Monad’s 2-Phase Execution (2PE) approach using identical commodity hardware (16-core AMD 4564P CPU with 192 GB RAM). The results showed:

– **1.5 to 1.7 times higher throughput** than Monad across various workloads
– Approximately **4 to 7 times speedup** over traditional sequential EVM execution
– Consistent performance under high-conflict scenarios typical in DeFi and arbitrage use cases

Unlike systems relying on speculative execution and frequent rollbacks, SupraBTM employs a deterministic scheduling model adaptable to different thread configurations, avoiding these costly pitfalls.

> “Supra was built from the ground up to integrate execution, consensus, and core infrastructure components into a cohesive framework,” said Jon Jones, CBO and Co-Founder at Supra.
> “The result is an architecture that not only delivers performance but does so in a reproducible and testable way against any known parallel EVM engine today.”

### Challenge Guidelines and Structure

The $1 million token bounty is open to developers or research teams who can demonstrate a faster EVM execution engine under specific test conditions. Entries must meet the following criteria:

– Process **at least 100,000 consecutive Ethereum mainnet blocks**
– Run on commodity hardware with **no more than 16 CPU cores**
– Achieve **at least a 15% performance improvement** across 4-, 8-, and 16-thread configurations (a toy check of this threshold follows the list)
– Publish benchmark results publicly with submissions for community and independent verification
– Release code under an **open-source license** accessible for audit
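
As a toy illustration of that headline threshold, the check below compares hypothetical throughput numbers (transactions per second) at each required thread count; every figure is invented for the example:

```python
# Toy check of the bounty's 15%-faster-at-every-configuration criterion.
REQUIRED_GAIN = 0.15
THREADS = (4, 8, 16)

def beats_baseline(challenger: dict[int, float],
                   baseline: dict[int, float]) -> bool:
    """True only if the challenger is >=15% faster at 4, 8, and 16 threads."""
    return all(challenger[t] >= baseline[t] * (1 + REQUIRED_GAIN)
               for t in THREADS)

baseline = {4: 9_000, 8: 16_000, 16: 27_000}     # hypothetical SupraBTM tx/s
challenger = {4: 10_500, 8: 19_000, 16: 30_000}  # hypothetical submission tx/s
print(beats_baseline(challenger, baseline))  # False: the 16-thread gain is ~11%
```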

Participants can claim the reward directly or collaborate further with Supra’s engineering team. The $1 million token reward comes from Tobkin’s personal allocation, scheduled to unlock in 2027 and vest over two years. The prize is entirely independent of Supra’s core operations and treasury.

> “This challenge is focused on the core technical issue that continues to constrain the EVM,” Tobkin added.
> “The objective is to find or validate the most performant execution engine possible. If someone builds a better system than what we’ve achieved at Supra, the industry should recognize it and benefit.”

### Additional Resources

For full technical documentation, challenge rules, and binaries related to the SupraEVM Beta Bounty, visit the dedicated [SupraEVM Speed Challenge docs page](#).

Supra’s technical team has also published a detailed benchmark report comparing SupraBTM and Monad on their website. Developers interested in early access to SupraEVM can join the [waitlist here](#).

### About Supra

Supra is the first blockchain built for **Automatic DeFi (AutoFi)** — an innovative self-operating financial system framework optimized for crypto AI Agents. It is built on a vertically integrated Layer-1 blockchain featuring:

– High-speed native smart contracts
– Built-in price oracles
– System-level automation
– Bridgeless cross-chain messaging

Supra’s vertical stack delivers novel AutoFi primitives that generate fair, recurring protocol revenue, which can be redistributed to reduce reliance on inflationary block rewards over time.

This technology stack equips on-chain AI Agents with the necessary tools to autonomously and securely execute complex DeFi workflows for users — streamlining decentralized finance with automation and intelligence.

*Stay tuned to Supra’s official channels for updates on the SupraEVM Speed Challenge and advancements in high-performance blockchain execution.*
https://chainwire.org/2025/11/14/coinbase-ventures-backed-supra-offers-1m-bounty-to-beat-its-parallel-evm-execution-engine/

I used AI to predict prices of Amazon’s 20 most popular SSDs for Black Friday and it doesn’t look good at all

Google Gemini Predicts Smaller SSD Discounts Than Expected for Black Friday 2025

Both portable and internal SSD prices appear likely to remain close to normal this Black Friday, with shoppers expected to see fewer deep cuts on popular drives from Samsung, SanDisk, WD, and Crucial.

Gemini’s AI-generated forecasts for Amazon’s top 20 SSDs suggest that this year’s Black Friday discounts may not deliver the dramatic price drops many buyers are anticipating. Predictions across both portable and internal models mostly indicate minor price changes rather than significant reductions, pointing to a quieter season for storage deals in 2025.

Methodology

We analyzed SSD prices from October 2023, November 2023, October 2024, November 2024, and October 2025. Using this dataset, we had Google’s Gemini AI forecast the expected prices for Black Friday 2025, providing it with additional context for all drives to improve prediction accuracy.
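
For readers who want to reproduce the approach, here is a minimal sketch using Google’s google-generativeai Python SDK. The model name, the sample price rows, and the prompt wording are illustrative assumptions, not our exact setup:

```python
# Minimal sketch: ask Gemini to forecast Black Friday prices from history.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name

# Invented rows for illustration; real runs fed Oct/Nov 2023-2025 prices.
history = """drive,oct23,nov23,oct24,nov24,oct25
SanDisk Extreme 2TB,165,138,158,135,150
Samsung T7 2TB,160,140,152,130,150"""

prompt = (
    "Given these monthly UK prices in GBP for popular SSDs, forecast a "
    "likely Black Friday 2025 price for each drive and briefly justify it:\n"
    + history
)
print(model.generate_content(prompt).text)
```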

Not Huge Savings on Portable SSDs

Among portable SSDs, the SanDisk 2TB Extreme is predicted to drop slightly from £150 in October to around £140. Samsung’s T7 Portable 2TB follows a similar pattern, dipping from £150 to £135. The larger Samsung T9 4TB, which has maintained a steady price around £280, is forecast to decline modestly to approximately £260.

The Samsung Shield 4TB Portable SSD, fluctuating between £270 and £300, is expected to settle within that range at roughly £280. Meanwhile, Seagate’s 2TB Expansion SSD and the Crucial X9 2TB Portable show only marginal reductions, hovering around £185 and £110 respectively. The SanDisk 1TB Portable and SSK 1TB USB External drives remain largely unchanged at £75 and £60.

Despite some inconsistencies in Gemini’s data presentation—with a few rows missing or misaligned—the pattern is clear: portable SSDs are unlikely to see meaningful discounts this year.

Internal SSDs Follow a Similar Trend

Internal SSDs reflect a comparable scenario. The Samsung 990 EVO Plus 2TB is anticipated to fall modestly from £117 to £105, while the 980 Pro 2TB is expected to drop from £140 to £130. The 870 EVO 1TB will nudge down slightly from £88 to £80.

High-performance gaming options such as the WD Black SN7100 and SN850X are projected to hold steady around £65 and £125, respectively. Budget-friendly choices like the PNY CS900 500GB remain roughly at £27.

The Crucial P310 PCIe Gen4 SSD sees a minimal dip from £70 to £68, while the Western Digital WD Blue 1TB edges down from £64 to £60. Neither of these is expected to dip below last year’s price floors.

What This Means for Shoppers

This subdued price movement suggests that manufacturers and retailers are maintaining tighter margins, likely due to higher component costs and lower excess inventory compared to previous seasons. In short, 2025’s Black Friday SSD deals are shaping up to be steady rather than spectacular, with prices already sitting close to their practical limits and leaving little room for significant discounts.

It’s also worth noting that current SSD prices have been excluded from our analysis because they are considerably higher than prices from October, a typical pre-sale pattern that often makes eventual Black Friday discounts appear larger than they actually are.


https://www.techradar.com/pro/i-used-ai-to-predict-prices-of-amazons-20-most-popular-ssds-for-black-friday-and-it-doesnt-look-good-at-all

BTC Mining Profitability Slumps as Hashprice Falls to Multi-Month Low

Hashprice has plunged to its lowest level since April, when Bitcoin was trading around $76,000. It now sits at $43.1 per petahash per second (PH/s) per day.

Hashprice, a term coined by Luxor, refers to the expected value of one terahash per second (TH/s) of hashing power per day. It represents how much a miner can earn from a specific amount of hashrate. Hashprice is influenced by several factors, including Bitcoin’s price, network difficulty, block subsidy, and transaction fees.
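
Those factors combine in a simple way. A back-of-the-envelope sketch (every input below is an illustrative assumption, not live network data) shows how a figure near the quoted level arises:

```python
# Back-of-the-envelope hashprice: daily USD miner revenue per PH/s.
btc_price = 104_000       # USD per BTC (assumed)
subsidy = 3.125           # BTC block subsidy after the April 2024 halving
fees = 0.02               # assumed average BTC in fees per block (bear-market level)
blocks_per_day = 144      # one block roughly every ten minutes
hashrate_phs = 1_100_000  # 1.1 ZH/s expressed in PH/s

daily_revenue_usd = (subsidy + fees) * blocks_per_day * btc_price
print(f"~${daily_revenue_usd / hashrate_phs:.1f} per PH/s per day")  # ~$42.8
```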

Since Bitcoin has corrected roughly 20% from its October all-time high to around $104,000, and transaction fees remain at bear-market levels, miner revenues have come under increasing pressure. According to mempool.space, processing a high-priority transaction currently costs about 4 sat/vB ($0.58). Meanwhile, average transaction fees are at their lowest levels in years.

Hash rate, the total computational power miners use to secure the Bitcoin network, remains just below all-time highs at over 1.1 zettahashes per second (ZH/s). This has coincided with a recent difficulty adjustment reaching an all-time high of 156 trillion (T), up 6.3%. Difficulty recalibrates every 2,016 blocks (roughly every two weeks) to keep new blocks arriving approximately every ten minutes, maintaining network stability as mining power fluctuates.

Declining Bitcoin prices, low transaction fees, and record-high difficulty are all weighing on Bitcoin mining profitability.

As a result, Bitcoin miners have pivoted to AI and high-performance computing (HPC) data center operations to secure more reliable revenue streams. By locking in longer-term contracts with data companies, miners can stabilize cash flow and reduce their reliance on volatile Bitcoin market conditions.
https://bitcoinethereumnews.com/bitcoin/btc-mining-profitability-slumps-as-hashprice-falls-to-multi-month-low/

IREN Signs $9.7B AI Cloud Deal With Microsoft

Bitcoin mining company IREN (IREN) has signed a multi-year GPU cloud services contract with Microsoft, marking a significant step in integrating traditional mining infrastructure with the growing demands of Big Tech for AI computing power.

The five-year agreement, valued at $9.7 billion, will provide Microsoft with access to Nvidia GB300 GPUs hosted within IREN’s data centers. In a related development, IREN also announced a $5.8 billion deal with Dell Technologies to acquire GPUs and related equipment.

To fund its capital expenditures, IREN plans to use a combination of cash reserves, customer prepayments, operational cash flow, and additional financing. The company emphasized that the agreement reinforces its position as a major provider of AI cloud services, following its strategic pivot into the sector in 2024.

Beyond AI, IREN remains one of the largest Bitcoin (BTC) miners by realized hashrate. Following the announcement, IREN shares traded sharply higher after Monday’s market open, reflecting strong investor enthusiasm.

### Bitcoin Miners Turn to AI Amid Profit Pressures

IREN is among a growing number of Bitcoin miners aggressively shifting focus toward AI GPUs and data infrastructure to diversify revenue streams amid an increasingly competitive and capital-intensive mining landscape.

HIVE Digital was one of the first to change strategy, beginning its transition in mid-2023 and now generating significant revenue from AI and high-performance computing services. Similarly, MARA Holdings unveiled an immersion cooling system in 2024 designed to support dense compute workloads such as AI.

Earlier this year, Riot Platforms also began laying the groundwork for a potential expansion into AI and high-performance computing.

In one of the sector’s largest deals to date, TeraWulf announced a $3.7 billion hosting agreement in August with AI cloud platform Fluidstack, backed by Google-parent Alphabet. The agreement includes a 10-year colocation lease with options for five-year extensions.

These moves highlight the evolving role of Bitcoin miners as they adapt to new opportunities presented by AI and cloud computing, leveraging their data center infrastructure to meet growing demand across industries.
https://bitcoinethereumnews.com/tech/iren-signs-9-7b-ai-cloud-deal-with-microsoft/?utm_source=rss&utm_medium=rss&utm_campaign=iren-signs-9-7b-ai-cloud-deal-with-microsoft

CORZ Has Major Upside Following Failed CRWV Takeover

Investment bank Macquarie has upgraded Core Scientific (CORZ) to an outperform rating from neutral, raising its price target on the stock by nearly 90% to $34 from $18. This move follows the collapse of the proposed merger deal between Core Scientific and CoreWeave (CRWV).

The failed merger came as no surprise, according to analysts Paul Golding and Marni Lysaght, who noted in their Thursday report that shareholder opposition was evident from reports and proxy recommendations. Despite the setback, Macquarie’s analysts view the outcome positively, as it gives Core Scientific greater flexibility to lease its near-term power capacity to AI tenants.

Core Scientific shares responded positively, rising 4.5% in early trading to around $21.70.

The analysts highlighted that Core Scientific’s 1.5 gigawatt (GW) power portfolio includes 590 megawatts (MW) already leased to CoreWeave, with an additional 1 GW gross—and roughly 700 MW billable—currently under load study. Management expects to sign at least one new colocation customer by the fourth-quarter earnings report. Macquarie noted that securing a new tenant could accelerate revenue diversification and underscore Core Scientific’s competitive advantage in high-performance computing buildouts.

Meanwhile, Jefferies commented that Core Scientific is moving forward with renewed focus after shareholders rejected the proposed merger with CoreWeave. According to analysts led by Jonathan Petersen, Core Scientific exits the merger process retaining 1.5 GW of existing and planned billable power capacity, with minimal capital expenditure tied to the now-defunct deal.

Throughout the merger talks, Core Scientific continued to expand its data center business, positioning itself to sign new tenants and power contracts by the end of the year. Jefferies emphasized that signing a new tenant would be a key milestone in diversifying revenue streams and reducing reliance on CoreWeave.

Jefferies currently holds a buy rating on Core Scientific shares, with a price target of $28.
https://bitcoinethereumnews.com/tech/corz-has-major-upside-following-failed-crwv-takeover/?utm_source=rss&utm_medium=rss&utm_campaign=corz-has-major-upside-following-failed-crwv-takeover

120,000 Bitcoin (BTC) Wallets at Risk With This Vulnerability

**Known Bug in Libbitcoin Explorer (bx) 3.x Puts Over 120,000 Bitcoin Wallets at Risk**

A critical vulnerability discovered in the Libbitcoin Explorer (bx) 3.x library has exposed more than 120,000 Bitcoin (BTC) wallets worldwide to potential hacking attempts. The issue stems from a weak random number generation method, making it significantly easier for attackers to guess seed phrases and compromise wallet security.

### Thousands of Bitcoin Wallets Vulnerable to Brute Force Attacks

First identified in November 2023, this vulnerability continues to leave non-custodial Bitcoin wallets susceptible to brute force attacks. On October 17, 2025, the OneKey wallet team shared an overview of the potential attack vector involving the vulnerable library.

The Libbitcoin Explorer (bx) library, a software development toolkit used to build Bitcoin wallets in C++, used the 32-bit Mersenne Twister algorithm for random number generation, seeded solely with the system time. This limited the seed space to just 2³² possible values, considerably weakening wallet security and making it easy for attackers to enumerate potential seeds.

As a result, wallets generated with certain versions of Trust Wallet and directly through Libbitcoin Explorer (bx) 3.x can be recovered by malicious actors.

### How Does the Hack Work?

Because the seed space is so small, a high-performance personal computer can exhaustively enumerate all possible seeds within days. This capability allows attackers to predict private keys generated at specific times, enabling them to steal assets on a massive scale.
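
To make the scale of the weakness concrete, here is an illustrative sketch. Python’s random.Random is also a Mersenne Twister (MT19937), so it serves as a stand-in for the C++ generator bx 3.x used; the function below is ours, not bx’s actual code:

```python
import random

# Stand-in for the flaw: wallet "entropy" produced by a Mersenne Twister
# seeded only with a 32-bit timestamp (Python's random is also MT19937).
def flawed_entropy(timestamp: int, nbytes: int = 32) -> bytes:
    rng = random.Random(timestamp)
    return bytes(rng.getrandbits(8) for _ in range(nbytes))

# An attacker who suspects a wallet was created in a given window can
# regenerate every candidate second's output and derive the corresponding
# seed phrase from each. With only ~2^32 possible seconds in total, the
# entire space is exhaustible on commodity hardware within days.
start = 1_500_000_000  # e.g., a suspected mid-2017 creation window
candidates = {t: flawed_entropy(t) for t in range(start, start + 10)}
print(len(candidates), "candidate entropy values from a 10-second window")
```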

Despite this weakness in the random number generator (RNG) being publicly known for over two years, many Bitcoin users relying on affected wallets still face significant risks.

### Three Steps to Protect Your Funds

To safeguard your Bitcoin holdings, users with non-custodial wallets created using vulnerable tools between 2017 and 2023 should take the following precautionary measures:

1. **Move Funds to Secure Wallets**
Transfer your assets to wallets protected by Cryptographically Secure Pseudo-Random Number Generators (CSPRNG) to ensure stronger randomness and security.

2. **Generate New Seed Phrases Using BIP 39 Standards**
Creating new seed phrases based on the BIP 39 specification can add an essential security layer to your Bitcoin wallet.

3. **Audit All Paper and Hardware Wallets**
Review any physical wallets that may be affected by the vulnerability, known in the community as the “Milk Sad Case,” and replace them if necessary.

For software wallet users, always keep your wallet applications and operating systems updated to the latest versions to minimize the risk of exploits.

By following these steps, Bitcoin users can reduce the risk of falling victim to brute force attacks targeting wallets generated with the flawed Libbitcoin Explorer (bx) 3.x library. Staying informed and proactive is essential in protecting your digital assets.
https://u.today/120000-bitcoin-btc-wallets-at-risk-with-this-vulnerability