Turbotodd

Ruminations on tech, the digital media, and some golf thrown in for good measure.

Archive for the ‘storage’ Category

To Mars and Augusta and Back

SXSW Interactive kicked into high gear over the weekend here in Austin. I won’t be in attendance this year, but Elon Musk made some news there by suggesting that SpaceX is on track to send his Mars-bound rocket on short trips by 2019.

The target for a cargo mission was 2022.

In a report from CNBC, Musk also elaborated on what would be needed to get things going on the Red Planet: “Mars will need glass domes, a power station, and an assortment of basic living fundamentals.”

One would presume those fundamentals include at least 2-3 Tesla Model 3s.

‘Cause, you know, you gotta tool around Mars in style.

Back here on planet Earth, a longtime unicorn may very well finally be going public. Dropbox filed an updated IPO prospectus indicating it planned to sell 36 million shares between $16 and $18 a share, according to a report from The New York Times. 

The company is expected to start trading on the Nasdaq next week under the ticker symbol “DBX.”

Finally, golf fans everywhere got a real treat at this weekend’s Valspar Championship.

Tiger Woods was back in contention at a PGA Tour event. He hadn’t won a tournament since 2013, and he still hasn’t. 

But his play on the tough Innisbrook Copperhead Course, home of the golf snakebite, was in top form, and he ended up tied for second. Englishman Paul Casey won the tournament, garnering only his second win on the PGA Tour and his first since 2009. Congrats!

So, there were Cinderella-redux stories all around, and The Masters is only a few weeks away. Might Phil Mickelson and Tiger Woods be ready to set the pace once again at Augusta National?

A final-round pairing of those two at the National? Well, a fella can always dream.

Written by turbotodd

March 12, 2018 at 9:42 am

IBM Transforms FlashSystem, Drives Down Cost of Data

IBM today announced sweeping advances in its all-flash storage solutions and software to drive down the costs of data and extend its solutions for hybrid and private cloud environments.

Some of the changes and additions include:

  • A new ultra-dense FlashSystem array stores more data in the same footprint, helping lower the cost of data capacity by nearly 60 percent;
  • New Spectrum Virtualize software simplifies migration and disaster recovery of data to and from the IBM Public Cloud;
  • New software enables IBM and non-IBM storage to be used with popular Docker and Kubernetes container environments (see the sketch after this list);
  • A cloud-based software beta program integrates storage with artificial intelligence and machine learning, collecting inventory and diagnostic information to help optimize the performance, capacity, and health of clients’ storage infrastructure.
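
For a rough sense of how block storage is typically consumed from Kubernetes, here is a minimal sketch using the official Kubernetes Python client to request a persistent volume. This is not IBM’s documented procedure; the storage class name “ibm-spectrum-block” and the claim name are hypothetical placeholders, since actual class names depend on the driver and configuration deployed in a given cluster.

```python
# Minimal sketch: request a persistent volume claim against a (hypothetical)
# IBM-backed storage class using the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="analytics-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ibm-spectrum-block",  # hypothetical class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Once a claim like this is bound, any container in the cluster can mount it as an ordinary volume, which is the point of exposing enterprise arrays through container-native interfaces.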

“Companies are seeking guidance in modernizing their data from being a passive cost center to being the central hub for their business. IBM understands that only those that extensively analyze and exploit their data will benefit from it,” said Ed Walsh, GM, IBM Storage and SDI. “To help clients make this transformation, we are introducing new all-flash solutions that will dramatically lower the cost of storage systems while making data availability – whether on-site or in the cloud – a central part of their business strategy.”

In addition to the aforementioned features, updates to IBM Storage systems and software include:

  • New Platform Speeds Private Cloud Deployments – IBM Spectrum Access solutions offer what storage administrators need to deploy a private cloud quickly and efficiently, delivering the economics and simplicity of the cloud with the accessibility, virtualization, and performance of an on-premises implementation;
  • Consumption-Based Pricing – a new utility offering enables a consumption-based buying model for hybrid cloud environments, spanning most of the IBM storage and VersaStack portfolios, for users who prefer to buy storage as an operating expense;
  • Consolidated User Interface – a new interface for FlashSystem 900 consolidates activity and performance information in a single dashboard. Consistent with the user interfaces of other IBM storage systems and IBM Spectrum Storage software, the UI simplifies operations and helps improve productivity;
  • VersaStack with FlashSystem – an extensive refresh of the IBM/Cisco VersaStack converged infrastructure offerings incorporates the newest FlashSystem announced today;
  • Investment Protection – several of the new all-flash storage and VersaStack solutions announced today are NVMe-ready, enabling them to take advantage of the NVMe offerings coming in 2018.

“With this announcement, IBM is demonstrating, among other things, how highly leveraged their FlashCore strategy is,” said Eric Burgener, Research Director for Storage at IDC.  “Next generation FlashCore enhancements, including higher density 3D TLC NAND-based media and hardware-assisted in-line compression and encryption, immediately improve the capabilities of multiple IBM All Flash Arrays by providing features that drive higher infrastructure density and improved security more cost-effectively.”

IBM’s leadership in storage systems and software is based upon more than 380 system patents, including IBM FlashCore technology, and more than 700 patents for IBM Spectrum Storage software. As a result, IBM’s flash arrays have been ranked as a Leader in the Gartner Magic Quadrant for Solid-State Arrays four years in a row, and IBM has been named the #1 software-defined storage vendor by IDC for the third year in a row.

The new features to IBM’s all-flash systems and IBM Spectrum Storage software will be available in Q4. Clients interested in participating in the IBM beta program for cognitive support can inquire by visiting ibm.biz/FoundationPilot.

For more information about IBM Flash Storage please visit: https://www.ibm.com/storage/flash.

Written by turbotodd

October 26, 2017 at 9:12 am

New And Smarter Systems

Among its many features, the new POWER7+ microprocessor offers an L3 cache 2.5 times larger than its predecessor’s, greater security through faster file encryption for the IBM AIX operating system, and memory compression with no increase in energy usage over previous-generation POWER7 chips.

While President Obama and Republican presidential candidate Mitt Romney were out in the desert eating burritos, visiting dams, and doing debate prep, we at Big Blue were preparing for our own significant announcement, one that just by happenstance emerged on the big debate day.

But it’s one to pay attention to, as it bolsters IBM’s smarter computing initiative and paves the way for companies to establish a more aggressive posture in what we call “cognitive computing.”

First, the broad headline: We’ve bolstered our smarter computing initiative by introducing new Power systems, storage, and mainframe technologies.

Specifically, we’ve infused the Power Systems family with the new POWER7+ processor (see the image to the left), which provides greater security, faster business analytics, capacity on demand, and significantly improved performance.

We’ve also introduced the new high-end DS8870 storage systems, massive devices that are three times faster than the previous models.

And recognizing the need for organizations to be more “data ready,” we’ve introduced the IBM DB2 Analytics Accelerator V3, which provides lightning fast analytics capabilities running on the recently introduced IBM zEnterprise EC12 mainframe.

The smarter computing initiative is aimed at solving the varied and intensifying challenges organizations are facing, from security vulnerabilities to managing ballooning data volumes that are expanding through social and mobile technologies.

You can learn more about IBM’s smarter computing initiative and these newly introduced technologies here. 

TurboTech: A Humorous Look At 2011 Technology Trends In Review

Not many people can say they’ve had the opportunity to work with a true broadcasting professional like Scott Laningham.

Blogger's Note: No dolphins were harmed during the making of this video. Green pigs who stole bird's eggs, well, that's a whole other story!

Even fewer would actually come clean and admit to having done so, especially on more than one occasion.

Because I’m neither a true professional nor someone who likes to allow the skeletons in his closet to accumulate, I’d rather face as many of them head on as I can take, like some egregious out-of-control episode of “Walking Dead” or, worse, a full-on “Angry Birds”-style assault come to life (but only if it’s the ad-supported version, as we’re too cheap to actually buy a copy). So it is with great pleasure that I feature for you, my readers, the latest episode of “TurboTech,” another fine example supporting the postulation by Gartner and others that broadband video is here to stay…even if Scott and I are not destined to be stars ourselves.

The following is video documentary evidence of what happens when nature cannot simply abhor a vacuum, but instead must attempt to fill it with technology forecasting tripe at the end of another grand year of massive technological disruption.  In our case, the year 2011, which was filled with much technological wonder and wonderment, not the least of which included fabric-based computing.

It shall also not go unnoticed by somewhat regular (assuming there are any of you) viewers that Scott continues to look and sound much, much better than me in these episodes, indicating once again that Scott continues to have better technology than me.

This, too, must change.

Solid State, Solid Storage

Solid state has evolved way beyond simply replacing vacuum tubes.

IBM today released the findings of a customer survey that demonstrates pent-up demand for solid state disk technology as a successor to flash and hard disk drives.

Customers are embracing high-performance solid-state disks to support growing data storage demands driven by cloud computing and analytics technologies.

More than half of the customers surveyed (57 percent) responded that their organization needs to develop a new storage approach to manage future growth. The survey of 250 U.S. IT professionals in decision-making positions was conducted by Zogby International in August 2011 on behalf of IBM.

The survey demonstrates a need for a new class of storage that can expand the market for solid-state drives (SSDs) by combining increased data delivery with lower costs and other benefits.

Nearly half (43 percent) of IT decision makers say they have plans to use SSD technology in the future or are already using it. Speeding delivery of data was the motivation behind 75 percent of respondents who plan to use or already use SSD technology. Those survey respondents who are not currently using SSD said cost was the reason (71 percent).

Anticipating these challenges years ago, IBM Research has been exploring storage-class memory, a new category of data storage and memory devices that can access data significantly faster than hard disk drives — at the same low cost.

Racetrack memory, a solid-state breakthrough technology, is a potential replacement for hard drives and successor to flash in handheld devices. A storage device with no moving parts, it uses the spin of electrons to access and move data to atomically precise locations on nanowires 1,000 times finer than a human hair.

This technique combines the high performance and reliability of flash with the low cost and high capacity of the hard-disk drive. It could allow electronics manufacturers to develop devices that store as much as 100 times more information while using less energy than today’s designs. Racetrack memory is featured as one of IBM’s top 100 achievements as the company celebrates its Centennial this year.

These new storage technologies could also alleviate critical budget, power, and space limitations facing IT administrators. Today, an average transaction-driven datacenter uses approximately 1,250 racks of storage, occupying 13,996 square feet and consuming 16,343 kilowatts (kW) of power. By 2020, storage-class memory could enable the same amount of data to fit in a single rack occupying 11 square feet and consuming 5.8 kW.
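
For a sense of scale, here is a quick back-of-envelope calculation using only the figures quoted above; the numbers are the article’s, and the arithmetic is purely for illustration.

```python
# Rough reduction factors implied by the datacenter figures quoted above
racks_now, sqft_now, kw_now = 1_250, 13_996, 16_343   # today's footprint
racks_scm, sqft_scm, kw_scm = 1, 11, 5.8              # projected with storage-class memory

print(f"racks:       {racks_now / racks_scm:,.0f}x fewer")   # 1,250x
print(f"floor space: {sqft_now / sqft_scm:,.0f}x smaller")   # ~1,272x
print(f"power:       {kw_now / kw_scm:,.0f}x lower")         # ~2,818x
```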

Following are further details from the survey:

  • Nearly half (43 percent) say they are concerned about managing Big Data;
  • About a third of all respondents (32 percent) say they either plan to switch to more cloud storage in the future or currently use cloud storage;
  • Nearly half (48 percent) say they plan to increase storage investments in virtualization, with cloud (26 percent), flash memory/solid state (24 percent), and analytics (22 percent) also cited; and
  • More than a third (38 percent) say their organization’s storage needs are growing primarily to drive business value from data. Adhering to government compliance regulations that require organizations to store more data for longer — sometimes up to a decade — was also a leading factor (29 percent).

You can learn more about IBM Storage technologies here. Also visit the blog of IBM storage expert and Master Inventor Tony Pearson, a longtime storage consultant who writes on storage and storage-networking hardware, software, and services.

Written by turbotodd

September 21, 2011 at 3:53 pm

Singin’ In The Amazon Cloud

If the sun doesn’t come back out in Austin soon, I’m going to have to move closer to the equator.

But for some, cloudy skies are just what the doctor ordered.

Amazon’s new Cloud Drive, Cloud Player for Web, and Cloud Player for Android were announced overnight, and together they aim some big guns directly at Google and Apple in the online music marketplace.

According to the Amazon press release, “these services enable customers to securely store music in the cloud and play it on any Android phone, Android tablet, Mac or PC, wherever they are.

“Customers can easily upload their music library to Amazon Cloud Drive and can save any new Amazon MP3 purchases directly to their Amazon Cloud drive for free.”

Music in the clouds? Or in too many Amazon executives’ heads?

Only time, and perhaps a few gazillion Amazonian music streams, will tell the tale.

The good news is, the streaming service from the Amazon cloud is free.

The bad news is, how do I get all those countless hours of my life back that I spent burning CDs into iTunes?

What do you mean, there’s no rebate for that??

Don’t pay any attention to me; I’m obviously biased (although I’ve never been a big fan of iTunes, either. Come to think of it, I really just don’t like DRM!).

Engadget deconstructs the new service and explains that it works something like this: Existing Amazon customers in the US can upload their MP3 purchases from Amazon to their own 5GB cloud space (I’ve always wanted to have my own place in the clouds!).

This is then upgradable to a one-year 20GB plan for free upon purchasing an MP3 album, with additional plans then starting at $20 a year.

My two cents: It’s one heckuva lot easier to just subscribe to Slacker or Pandora for a year.

But maybe that’s just me: I gave up moving all those digital files around about the moment I figured out I was spending way more time moving music files around than I was actually listening to music.

But, I’m a forever Amazon customer, so I’ll give them the benefit of the doubt and see how this plays out.

Pun intended.

Written by turbotodd

March 29, 2011 at 2:59 pm

Superfast Analytics

This week flew by.

As pretty much has this whole year.  November 19th, you say?

Speaking of speed, the Supercomputing 2010 conference has been going on down in the great city of New Orleans this week.

At the event, IBM earlier today unveiled details behind a new storage architecture design that will convert terabytes of pure information into actionable insights twice as fast as was previously possible.

This new capability is ideally suited for cloud computing applications and data-intensive workloads such as digital media, data mining, and financial analytics. The new architecture is expected to shave hours off of complex computations without requiring heavy infrastructure investment.

Created at IBM’s Almaden research lab, the new General Parallel File System-Shared Nothing Cluster (GPFS-SNC) architecture is designed to provide higher availability through advanced clustering technologies, dynamic file system management, and advanced data replication techniques.

By “sharing nothing,” new levels of availability, performance, and scaling are achievable. GPFS-SNC is a distributed computing architecture in which each node is self-sufficient; tasks are divided up among these independent computers, and no one node waits on another.
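
To make the “shared nothing” idea concrete, here is a minimal sketch in plain Python, not anything GPFS-specific, that partitions files across independent workers by hash so each worker touches only the data it owns and never waits on a peer. The file paths and node count are purely illustrative.

```python
# Minimal shared-nothing sketch: each worker owns a hash-determined slice of
# the files and processes only that slice, never touching another's data.
import hashlib
from concurrent.futures import ProcessPoolExecutor

NODES = 4  # pretend each worker process is an independent node with local disk


def owner(path: str) -> int:
    """Map a file path to exactly one owning node via a stable hash."""
    return int(hashlib.md5(path.encode()).hexdigest(), 16) % NODES


def process_local_shard(node_id: int, files: list) -> int:
    # Stand-in for real analytics: a node counts (or scans) only the files it owns.
    return sum(1 for f in files if owner(f) == node_id)


if __name__ == "__main__":
    files = [f"/data/ticks_{i}.csv" for i in range(1000)]  # illustrative paths
    with ProcessPoolExecutor(max_workers=NODES) as pool:
        shard_counts = list(pool.map(process_local_shard, range(NODES), [files] * NODES))
    print(shard_counts, "files per independent node")
```

Because no worker depends on another’s disk or locks, the slowest part of the job is simply the largest local shard, which is the property that lets shared-nothing designs scale by adding nodes.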

IBM’s current GPFS technology offering is the core technology for IBM’s High Performance Computing Systems, IBM’s Information Archive, IBM Scale-Out NAS (SONAS), and the IBM Smart Business Compute Cloud.

These research lab innovations enable future expansion of those offerings to further tackle tough big data problems.

As an example of how such a capability might be used in the “real” world, large financial institutions run complex algorithms to analyze risk based on petabytes of data.

With billions of files spread across multiple computing platforms and stored across the world, these mission-critical calculations require significant IT resource and cost because of their complexity.

Using this GPFS-SNC design, running this complex analytics workload could become much more efficient, as the design provides a common file system and namespace across disparate computing platforms, streamlining the process and reducing the disk space required.

You can learn more about the basic GPFS capability here.

Written by turbotodd

November 19, 2010 at 2:53 pm
