NetApp Keystone

What is NetApp Keystone?

After this year's NetApp Insight, I keep getting questions about what NetApp Keystone actually is. The answer is quite straightforward: NetApp Keystone is a new consumption model where you choose a performance tier, pick a storage service (file, block, or object), and decide whether you will manage it yourself or let NetApp take care of it. Customers can order storage alone or a combination of compute and storage. The operating model is cloud/service-oriented, and you pay as you use. We are used to ordering a cloud service and consuming it at the cloud provider's locations, but NetApp Keystone changes this as well: you can order capacity and use it on-premises, with a cloud consumption billing model and flat, predictable pricing.
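To make the pay-as-you-use idea concrete, here is a minimal sketch of how a subscription with a committed base capacity and billed burst usage could be calculated. The rates and the commit/burst split are hypothetical, invented purely for illustration; they are not NetApp's actual pricing terms.

```python
# Hypothetical illustration of a pay-as-you-use billing model like
# Keystone's: all names and rates below are made up for the example,
# not actual NetApp pricing.

def monthly_bill(consumed_tib: float, committed_tib: float,
                 rate_per_tib: float, burst_rate_per_tib: float) -> float:
    """Flat rate up to the committed capacity, burst rate above it."""
    base = committed_tib * rate_per_tib
    burst = max(0.0, consumed_tib - committed_tib) * burst_rate_per_tib
    return base + burst

# Example: 100 TiB committed at a flat $30/TiB, 20 TiB of burst at $36/TiB.
print(monthly_bill(consumed_tib=120, committed_tib=100,
                   rate_per_tib=30.0, burst_rate_per_tib=36.0))  # 3720.0
```

The appeal of such a model is exactly the predictability: the committed portion of the bill never changes, and only genuine overage is billed at the burst rate.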

But then a new question arises: how can NetApp offer a consumption model without risk?

NetApp products already come with working efficiency, performance, and availability guarantees, and with Active IQ NetApp can perform smart health checks and provide AI-powered insights. This gives both NetApp and its customers very precise information about what their consumption will be in the coming months and years.

The last question usually is: what will I get when I order NetApp Keystone services?

If you order capacity located on-premises, you will get storage from the All-Flash FAS series, from the AFF A220 to the AFF A800; which model you get depends on your requirements. If you order both compute and storage, NetApp will deliver NetApp HCI, configured to the customer's requirements, with nodes added as needed. The NetApp Keystone services hosted at cloud providers are the well-known cloud volume services (Azure NetApp Files, Cloud Volumes for AWS, and Cloud Volumes for Google Cloud Platform) and multi-cloud services ranging from Cloud Volumes ONTAP and Cloud Tiering to NetApp Kubernetes Service and Cloud Backup Service.

In general, NetApp Keystone is a service-oriented offering built from the most popular NetApp products.

My memories of NetApp Insight Europe 2018

When I was driving home from the airport on Thursday, alone in the car, it was the right moment to think about what the best moment of this year's Insight was. There were many candidates, but in the end I decided to pick two: a social one and a technology one.

The social one happened on the day I arrived in Barcelona. I landed at Barcelona Airport late on Sunday afternoon and hurried to my hotel to change my shirt. The guys from NetApp United were already waiting for me in the bar, so we went together to the Soho Hotel Barcelona, where we had a reception and dinner with a movie screening afterward. This was no regular dinner. First, all the NetApp A-Team members attending NetApp Insight EMEA were there; second, all the NetApp United members attending NetApp Insight were there as well. Both groups gather people with a passion for technology, especially NetApp's, and a lot of exciting conversations happen between them. When Dave Hitz, founder of NetApp, and other executives came into the room, the excitement and energy rose to a high level. These are the people creating NetApp's vision, and their energy is incredible. I felt very proud to be in the same room with them. When we finished dinner, we moved to a private cinema in the basement of the hotel and watched a very funny movie, Deadpool 2.

At every Insight, I can hardly wait for the keynotes. This year the best one for me was on Wednesday. As every NetApp Insight keynote does, this one started with NetApp A-Team members, who shared their thoughts about data. Tuesday's keynote talked about the need to be data-driven, and Wednesday's showed us how NetApp's Data Fabric strategy makes that possible. Kate Russell, this year's moderator, started with Henri Richard, Executive Vice President of Worldwide Field and Customer Operations, who pointed out that companies all over the world have a business strategy but lack a data strategy; because data is becoming the business, a data strategy is essential as well. Scott Dawkins, NetApp CTO, and Jeff Wike, DreamWorks Animation CTO, continued by talking about the data and technology challenges of creating a movie like How to Train Your Dragon, which comes out in February 2019. Some 500-600 artists create 4-5 movies like that a year. Every single thing is created on a computer and saved on disk: half a billion files and 120 million core hours of rendering. Imagine how many hard drives they would need without NetApp's efficiency technology. Their strategy is that every workload can run on-premises or in the cloud without the artist knowing the difference. Managing on-premises and cloud infrastructure seamlessly is one of their more important requirements.

Anthony Lye showed NetApp's Data Fabric strategy at work. Anthony demonstrated how quickly and easily NetApp Cloud Data Services can be deployed and put to use. NetApp Cloud Data Services can reach very high performance: SAP HANA can be delivered on Azure NetApp Files with 100k IOPS at sub-millisecond latency, file services run at 1,200 MB/s at under 2.0 ms, and database and ERP workloads reach 310k IOPS at 1.5 ms. These speeds are even higher than we can achieve on-premises, and we don't need that kind of performance all the time anyway. So why would we buy and pay for on-premises equipment that sits idle most of the time, if with a few clicks we could configure, manage, and use it in the public cloud? One of the biggest challenges is also how to monitor and optimize infrastructure both on-premises and in the public cloud. James Holden presented Cloud Insights, NetApp's SaaS monitoring solution. Cloud Insights can monitor the complete NetApp portfolio, such as FAS, AFF, and HCI, as well as cloud data services like Azure NetApp Files and Cloud Volumes for Google Cloud Platform, plus many third-party devices and cloud services. Ready-made dashboards give you visibility into your environment, and optimization opportunities surface just seconds after data starts flowing into the system. Building a model of how each device interacts with the infrastructure is a significant advantage over other monitoring tools. Zooming down into a key metric lets you find issues quickly.

Seeing the Data Fabric in action was amazing. With just a few clicks, Kim moved 420 TB of automatically discovered inactive data from the data center into an object bucket, located either in the cloud or on-premises, and we could immediately see how much money that saved. The numbers are unbelievable: $710,220 in savings for the 420 TB moved to the cloud.
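For a sense of scale, the quoted figure works out to roughly $1,691 saved per TB tiered off primary flash. A minimal sketch of that arithmetic, using only the two numbers from the demo:

```python
# Tiering-savings arithmetic. The 420 TB and $710,220 figures come from
# the keynote demo; the per-TB number is simply their ratio.
inactive_tb = 420
quoted_savings_usd = 710_220

savings_per_tb = quoted_savings_usd / inactive_tb
print(f"~${savings_per_tb:,.0f} saved per TB moved to the object tier")
# ~$1,691 saved per TB moved to the object tier
```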

With AFF, speeds increased a lot, but with MAX Data they went beyond anything we could imagine. With MAX Data we can get up to an 11x performance increase without rewriting the application. MAX Data is application-server-resident software that creates a filesystem spanning persistent memory and external flash, so you can host environments much larger than would fit in persistent memory alone. MAX Data can be configured in a few seconds, and you can start using a system with latency below 0.04 ms and up to 530k IOPS.

Last but not least, Dave Hitz came on stage and finished the keynote in his unique style. Dave shared a story about cloud-first customers. The story was about a company that sold athletic equipment through big stores and realized it needed to change its business model to sell running shoes over the internet. So the company went through a digital transformation and created a nice-looking website and phone apps, and most of its business moved to digital channels. It had built its complete infrastructure on-premises, and its load was about 30% of capacity most of the time, but a couple of times a year the load went up to 200%. The first option was to upgrade the data center, but the company realized that moving its data to the cloud could be cheaper and would save a lot of money, because it doesn't need 200% of the performance all the time. When we face this kind of dilemma, we should ask ourselves two questions: Could it? Should it?

In the end, Dave showed us that he's still got it with a command-line demo of creating a cloud volume in Azure NetApp Files, showing how NFS as a service and CIFS as a service feel in the cloud…

A month after NetApp Insight finished, I am still full of good memories and still sure that NetApp Insight is the only IT conference where you can feel and touch what the future of IT will be. NetApp is certainly building a good foundation for the future with its cloud portfolio, NetApp AI, and NetApp MAX Data.

NetApp ONTAP 9.5 – Feature update

As NetApp Insight in the USA enters its final strokes, NetApp has released a new feature update of its number one storage operating system: NetApp ONTAP 9.5. This release introduces some very interesting new features.

I have waited a long time for a replication mechanism built on SnapMirror that provides zero data loss (an RPO of zero) and fast recovery with very low RTO. The new feature is called SnapMirror Synchronous (SM-S). NetApp also introduced a new way of licensing with this highly useful feature: the SnapMirror Synchronous license must be purchased for the primary cluster and is priced by the capacity of the volumes that are synchronously replicated. A traditional SnapMirror license must be purchased alongside it. Currently, only the most used protocols are supported: FC, iSCSI, and NFSv3. Two modes of synchronous replication are available: a strict mode, which should be used to replicate transaction logs from applications (Exchange, SQL), and a normal mode used for regular data. For successful synchronous replication, the round-trip network latency must not exceed 10 ms, and the number of synchronously replicated volumes must not exceed the per-node limit of 80 volumes for AFF and 40 volumes for FAS. SnapLock Compliance now also supports SnapMirror XDP logical replication with storage efficiency mechanisms turned on.
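As a quick illustration of those documented limits (10 ms round-trip latency, 80 synchronously replicated volumes per node on AFF, 40 on FAS), here is a small hypothetical pre-check one might run before adding another SM-S relationship. The function and its thresholds simply encode the numbers above; it is a sketch, not a NetApp tool.

```python
# Hypothetical pre-check before adding another SnapMirror Synchronous
# relationship, encoding the ONTAP 9.5 limits described in the text:
# round-trip latency of at most 10 ms, and at most 80 synchronously
# replicated volumes per node on AFF (40 on FAS).

SM_S_MAX_RTT_MS = 10.0
SM_S_VOLUME_LIMITS = {"AFF": 80, "FAS": 40}

def sm_s_precheck(rtt_ms: float, sync_volumes_on_node: int,
                  platform: str) -> list[str]:
    """Return the reasons an additional SM-S volume would violate limits."""
    problems = []
    if rtt_ms > SM_S_MAX_RTT_MS:
        problems.append(f"RTT {rtt_ms} ms exceeds the {SM_S_MAX_RTT_MS} ms limit")
    if sync_volumes_on_node >= SM_S_VOLUME_LIMITS[platform]:
        problems.append(f"{platform} per-node limit of "
                        f"{SM_S_VOLUME_LIMITS[platform]} volumes reached")
    return problems

print(sm_s_precheck(rtt_ms=12.3, sync_volumes_on_node=75, platform="AFF"))
# ['RTT 12.3 ms exceeds the 10.0 ms limit']
```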
Enhancements to MetroCluster over IP were also made in ONTAP 9.5. Support now extends to 4-node configurations of the AFF A300 with ADP or the FAS8200 without ADP. The supported distance has been extended to up to 700 km, which is a really long distance.

A new-old performance feature known as FlexCache returns from 7-Mode systems. In ONTAP 9.5, cache volumes are sparsely populated within a cluster (intra-cluster) or across multiple clusters (inter-cluster). FlexCache volumes cache only the "hot" blocks of user data and metadata. The main benefits of using FlexCache are lower read latency at remote locations and increased collaboration and productivity across multiple locations. FlexCache is used on NFSv3 volumes, with typical use cases being ASIC electronic design automation (EDA) and media and computer-generated imagery (CGI) rendering.
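A FlexCache volume is always created against an origin volume, and on releases after 9.5 that expose ONTAP's REST API the creation can be scripted. Below is a hedged sketch only: the cluster address, credentials, and all object names are invented, and the endpoint shape is based on the REST API of later ONTAP releases, so verify it against the documentation for your version (on 9.5 itself you would use the CLI or ZAPI instead).

```python
# Hypothetical sketch of creating a FlexCache volume over ONTAP's REST API
# (available in releases after 9.5). Cluster address, credentials, and all
# object names below are made up for illustration.
import requests

cluster = "https://cluster.example.com"
auth = ("admin", "password")  # placeholder credentials

body = {
    "name": "projects_cache",              # the new sparse cache volume
    "svm": {"name": "svm_cache"},          # SVM hosting the cache
    "aggregates": [{"name": "aggr1"}],
    "origins": [{                          # origin volume being cached
        "volume": {"name": "projects"},
        "svm": {"name": "svm_origin"},
    }],
}

resp = requests.post(f"{cluster}/api/storage/flexcache/flexcaches",
                     json=body, auth=auth, verify=False)
resp.raise_for_status()
print(resp.json())
```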

 

NVMe over Fibre Channel (NVMe/FC) multipath failover is now possible with Asymmetric Namespace Access (ANA). ANA can be compared to ALUA in the FCP world. ANA is another technology from NetApp that is being accepted as an industry standard. For now, ANA is supported only on SUSE Linux Enterprise Server 15.
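On a Linux host using native NVMe multipath, the kernel exposes the ANA state of each path as a sysfs attribute. Here is a small hedged sketch that simply searches for those attributes rather than assuming a fixed layout, since the exact sysfs paths vary by kernel version:

```python
# Hedged sketch: report the ANA state of NVMe paths on a Linux host with
# native NVMe multipath. The exact sysfs layout varies by kernel version,
# so we search for attributes named "ana_state" instead of hard-coding
# a path.
from pathlib import Path

def nvme_ana_states() -> dict[str, str]:
    states = {}
    for attr in Path("/sys/class").glob("nvme*/**/ana_state"):
        states[attr.parent.name] = attr.read_text().strip()
    return states

if __name__ == "__main__":
    for path_name, state in nvme_ana_states().items():
        print(f"{path_name}: {state}")  # e.g. "optimized" / "non-optimized"
```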

FlexGroup volumes can now scale up to 20 PB and 400 billion files and support additional SMB features such as native file auditing, FPolicy, Storage-Level Access Guard, copy offload, and inherited watches for change notifications. FlexGroup volumes can span multiple nodes or tier to the cloud.

With this release, NetApp has proven once again that it is following its Data Fabric strategy, building a highway for data to the clouds and between clouds. On-premises systems are now closer to the clouds than they have ever been.

NetApp Insight and why it’s the best conference in the IT industry

As summer ends, the season of annual IT events has already started with VMworld US last month. Other IT events will follow, but for me one of the best will always be NetApp Insight. NetApp Insight is an annual event organized by the data management company NetApp. The first time I attended was in 2015. Over the past few years, the EMEA edition of the event has been held in Berlin. As usual, this year's US event is once again in Las Vegas, but the European edition will be in Barcelona, Spain, at the beginning of December. This is a welcome change of location for me. Not that Berlin wasn't good, but going to conferences is a great opportunity to do some quick sightseeing.

NetApp Insight 2014

And after three visits, Berlin already feels as familiar as a home town to me, so the thrill of the unknown is missing. As you can tell, I'm excited about Barcelona, to say the least. NetApp Insight has fascinated me with how well organized everything is; every session is carefully selected, so participants are never bored and can always pick up useful information and acquire new knowledge. Each year there are more than 100 unique breakout sessions featuring NetApp solutions, case studies, best practices, tools, and innovations. Of course, NetApp technology is mentioned a lot, but it's done in a nicely discreet way, so you don't get the feeling you are being brainwashed, as I have felt at so many events run by other technology vendors, where the organizers are very pushy and everything is all about them. NetApp Insight is well known among attendees as a very technical, not sales-oriented conference, in contrast to other similar events. Sessions are built around NetApp products in combination with technologies from alliance partners such as VMware, Microsoft, Veeam, SAP, Oracle, and others. You can hear and learn about technologies beyond NetApp's, which always comes in handy when working on projects. Furthermore, NetApp Insight is also a great opportunity to improve and test your skills with hands-on labs, free-of-charge NetApp certification exams, and technical training sessions. For me personally, the best feature of NetApp Insight is the keynotes, with very interesting speakers like NetApp's founder Dave Hitz (@DaveHitz) and Matt Watts (@mtjwatts), Director of Technology & Strategy.

I will never forget Dave Hitz's story comparing data to water pipes and how he came up with a new technology that lets NetApp customers use less storage.

Dave Hitz looking to the future…

This feature is now known as deduplication, and NetApp was the first company to introduce deduplication on primary storage. Many people thought Dave would sink the NetApp ship by providing deduplication for free to every NetApp customer, with NetApp ending up selling less storage. They were wrong: NetApp sold even more storage, and other vendors followed Dave Hitz's idea…

In the next year's keynote, Dave explained how he came up with the idea that NetApp needed to develop a way for customers to move their data seamlessly from their storage to the cloud, much like plumbing pipes. That idea is now called Data Fabric, and it is widely used around the world as a highway for shipping data to the clouds. One year, Matt Watts gave a very interesting speech about how the role of IT is changing. I find keynotes like these very inspiring; they show you NetApp's vision for the upcoming years. Most likely this year's focus will be on the cloud, as the cloud opportunity is growing rapidly. NetApp is focused on cloud innovations to create more cloud opportunities and accelerate new services. I believe modernizing IT architecture with cloud-connected flash will also be a major topic at this year's Insight. Aside from the great content, one of the best benefits of attending Insight is the networking opportunity to connect with great people. I've very much enjoyed the evening activities, which are always well organized, fun, and overall a great experience to remember. For example, I truly enjoyed last year's visit to the Berlin

NetApp logo built with Lego

Classic Remise depot, where classic cars, including very expensive ones like the Bugatti Veyron, are displayed, as well as the appreciation parties at iconic venues such as the Olympic Stadium and the famous Tempelhof Airport in Berlin. Last but not least, last year's party in an old Berlin underground station was definitely one to remember, featuring an awesome cover band called RPJ, which was phenomenal and definitely rocked the party. I hope they bring them back once again in Barcelona. For me personally, the only downside to Insight is the conference pass price, which is a bit high. However, if you look at what the conference has to offer you and your business, it's a great investment, especially considering the free certification opportunities. I honestly promise you won't be disappointed, just as one of my customers wasn't when I took him to the conference last year. He has since admitted that he thought it would be just another trip to Berlin, but he ended up amazed at the value of the conference and changed his mind completely. On the way back from Berlin, he said that NetApp Insight is definitely a conference he will be coming back to next year.

I'm excited to be back at Insight this year in Barcelona and look forward to the keynotes and to meeting my fellow NetApp United members. I'm sure Petya Stefanova (@PeytStefanova), the NetApp United coach, will prepare something great for NetApp United members. Last year she organized a lot of activities (pop-up tech talks, a table football competition…), and I cannot forget to mention the dinner with NetApp A-Team members and NetApp VPs, which was a very special start to the NetApp Insight week.

Artificial intelligence – computers are getting smarter and more predictive, much like humans

We are already using AI (artificial intelligence) all the time throughout our day, without being aware of it. AI is present in our everyday life: every time we use Siri, Google Assistant, or Cortana, they know every aspect of our lives, from the coffee meetup with a friend every Sunday at the same time and place, to our everyday commute details, our spouses' birthdays, or even the kids' next football match. Facebook's software is even capable of recognizing familiar faces and tagging them automatically without any human assistance.

According to a technology review by Google, in 2017 more than 26% of the companies interviewed planned to invest more than 15% of their IT budget in machine learning and AI. AI is being applied to solve bigger problems with a business purpose: for instance, a fast-food chain uses AI to increase customer engagement and shop visits by pushing a targeted ad for free ice cream on a hot afternoon. Another example of solving problems with AI: financial institutions analyze large amounts of data in order to discover fraud and eliminate risk. Machine learning also plays an important part at Uber, which uses machine learning algorithms to predict more accurate arrival times, pickup locations, and so on. Furthermore, did you know that airplanes already use AI for autonomous flying? Autopilot mode, for instance, is based on flight-planning management systems that define the most efficient route from point A to point B. The dedicated system analyzes data from a large number of sensors and adjusts the plane's speed, rate of climb, and altitude accordingly throughout the flight.

As announced last week, NetApp and NVIDIA have joined forces to bring to market a solution called NetApp ONTAP AI, enabling companies to adopt artificial intelligence faster and more easily. Through its Verified Architecture program, NetApp offers customers a proven architecture that is thoroughly tested, easy and quick to deploy, and fast to bring to market. NetApp ONTAP AI combines NVIDIA DGX-1 servers with NVIDIA Tesla V100 graphics processing units and a NetApp AFF A800 (lower models are also supported), with ultra-fast networking over Cisco Nexus 3232C 100Gb Ethernet switches used for inter-GPU communication via RDMA over Converged Ethernet (RoCE). Traditional HPC infrastructures connect compute nodes via RDMA over InfiniBand, which provides high bandwidth and low latency, but Ethernet technology now offers the same or better performance. That is why NetApp ONTAP AI uses well-understood and widely deployed Ethernet technology. Data on the storage is accessed through the NFS protocol over four 100GbE links. Each DGX-1 server is powered by eight Tesla V100 GPUs configured in a hybrid cube mesh topology leveraging NVIDIA NVLink technology, which provides a low-latency fabric for inter-GPU communication. The DGX-1 is backed by NVIDIA GPU Cloud, providing NVIDIA GPU-optimized containers for the most popular deep learning frameworks, such as Caffe2, TensorFlow, PyTorch, MXNet, and TensorRT, and it incorporates the NVIDIA CUDA Toolkit, which provides the NVIDIA CUDA Basic Linear Algebra Subroutines library (cuBLAS) and the NVIDIA CUDA Deep Neural Network library (cuDNN). NetApp ONTAP AI simplifies deployment by eliminating design complexity and guesswork. The solution scales up to multiple racks and is capable of massive throughput: up to 300 GB/s and 11.4 million IOPS in a 24-node cluster. Imagine how much faster, and how much more data, can be analyzed with this kind of performance. When your complete system depends on a solution, that solution must be upgradable non-disruptively; one of NetApp's greatest and longest-lived principles is that you can start small and grow without interruption. Similar to FlexPod, the first solution collaboration between Cisco and NetApp, NetApp ONTAP AI comes with a single point of contact for support, backed by AI experts, a winning strategy that has proven invaluable with FlexPod customers in the past.
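Since the DGX-1 nodes in this design read their training data from the AFF over NFS, here is a minimal, hypothetical PyTorch sketch of what that pattern looks like from the framework's side. The mount point and dataset layout are invented for illustration; only standard torch/torchvision APIs are used, and the framework simply sees a filesystem path that happens to be an NFS mount.

```python
# Minimal sketch of the training-side data path in a setup like ONTAP AI:
# the framework just reads from a filesystem path, assumed here to be an
# NFS mount backed by the AFF (mount point and layout are hypothetical,
# e.g. mounted with: mount -t nfs aff-a800:/trainingdata /mnt/trainingdata).
import torch
from torchvision import datasets, transforms

DATA_DIR = "/mnt/trainingdata/imagenet/train"  # hypothetical NFS mount

transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=transform)

# Many parallel loader workers keep the GPUs fed; the storage backend and
# the 100GbE links determine how far num_workers can usefully scale.
loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True,
                                     num_workers=16, pin_memory=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    break  # one batch is enough for the sketch
```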

I believe easy access to and deployment of AI solutions will bring huge changes to our lives that we are currently not capable of understanding or imagining. Hopefully, in the future, AI solutions like NetApp ONTAP AI will contribute to discovering cures for deadly diseases like cancer and HIV, growing food more efficiently, and making transportation faster and safer, and maybe even automate basic tasks like calling the car service or making a dentist appointment, so that we humans have more time to spend with our loved ones.