cluster

Results 1 - 25 of 173

The Cost of Using the Public Cloud

Published By: Dell EMC     Published Date: Apr 18, 2017
In this report, we compare the cost of an on-site hyperconverged solution with a comparable setup in the cloud. The on-site infrastructure is a Dell EMC VxRail™ hyperconverged appliance cluster; the cloud solution is Amazon Web Services (AWS).
Tags : 
cost, cloud, hyperconverged, hyperconverged infrastructure, aws, amazon web services, dell emc
    
Dell EMC

451 Group – Overview of ScaleBase Version 2.0 SB.2.13

Published By: Scalebase     Published Date: Mar 08, 2013
Technology analyst firm 451 Research offers a brief overview of ScaleBase’s Data Traffic Manager software for dramatic scaling of MySQL databases beyond the capabilities of MySQL 5.6.
Tags : 
shard, cluster, high availability, failover, mariadb, mysql, read/write, scalability, capacity planning, scalebase, 451 group overview, research, it management, data management, business technology, data center
    
Scalebase

A Powerful New Foundation for Creating Customer Campaigns (Merkle case study)

Published By: Dell EMC     Published Date: Oct 08, 2015
Download this white paper to learn how the company deployed a Dell and Hadoop cluster based on Dell and Intel® technologies to support a new big data insight solution that gives clients a unified view of customer data.
Tags : 
    
Dell EMC

A Question of Continuity: Maximizing Email Availability for your Business

Published By: Symantec.cloud     Published Date: Sep 07, 2010
This white paper looks at the value of email availability and how it can be improved.
Tags : 
messagelabs hosted services, email continuity, disaster recovery, back up, clustering, backup and recovery, business continuity
    
Symantec.cloud

A Scalable Software Build Accelerator: Break the Build Bottleneck with Faster, More Accurate Builds

Published By: Electric Cloud     Published Date: Nov 04, 2009
ElectricAccelerator improves the software development process by reducing build times, so development teams can reduce costs, shorten time-to-market, and improve quality and customer satisfaction.
Tags : 
electric cloud, electricaccelerator, software development, dependency management, software build accelerator, visualization, cluster manager, open source, roi, clustering, return on investment, server virtualization, software testing
    
Electric Cloud

A SOLID DATA CENTER STRATEGY STARTS WITH INTERCONNECTIVITY

Published By: Equinix     Published Date: Oct 27, 2014
Connections are great. Having a network to connect to is even better. Humans have been connecting, in one form or another, throughout history. Our cities were born from the drive to move closer to each other so that we might connect. And while the need to connect hasn’t changed, the way we do it definitely has. Nowhere is this evolution more apparent than in business. In today’s landscape, business is more virtual, geographically dispersed and mobile than ever, with companies building new data centers and clustering servers in separate locations.
Tags : 
data center, enterprise, cloud, experience, hybrid, performance, strategy, interconnectivity, network, drive, evolution, landscape, server, mobile, technology, globalization, stem, hyperdigitization, consumer, networking
    
Equinix

A Solid Data Center Strategy Starts with Interconnectivity

Published By: Equinix     Published Date: Mar 26, 2015
Connections are great. Having a network to connect to is even better. Humans have been connecting, in one form or another, throughout history. Our cities were born from the drive to move closer to each other so that we might connect. And while the need to connect hasn’t changed, the way we do it definitely has. Nowhere is this evolution more apparent than in business. In today’s landscape, business is more virtual, geographically dispersed and mobile than ever, with companies building new data centers and clustering servers in separate locations. The challenge is that companies vary hugely in scale, scope and direction. Many are doing things not even imagined two decades ago, yet all of them rely on the ability to connect, manage and distribute large stores of data. The next wave of innovation relies on the ability to do this dynamically.
Tags : 
data center, interconnectivity, mobile, server clusters, innovation, data storage, storage
    
Equinix

A Technical Overview of the Oracle SPARC SuperCluster T4-4

Published By: Oracle     Published Date: Apr 04, 2012
This white paper examines how the versatile design of the Oracle SPARC SuperCluster T4-4 along with powerful, bundled virtualization capabilities makes it an ideal platform for consolidating enterprise servers and workloads and deploying apps.
Tags : 
supercluster, sparc, t4-4, workloads, oracle exalogic elastic cloud, oracle, elastic capacity, infrastructure, enterprise applications, virtualization, cloud computing, design and facilities
    
Oracle

Farewell, Disaster Recovery

Published By: Pure Storage     Published Date: Sep 27, 2019
ENTER A NEW ERA: THAT OF TRUE BUSINESS CONTINUITY Thank you for all your years of service, dear disaster recovery. Nothing would have been the same without you at our side all these years. Leave the disaster/recovery mindset of the 1970s behind. Adopt a business continuity model built for today's always-on world, a model that is: - Agile - Efficient - Simple ITS NAME? PURITY ACTIVECLUSTER
Tags : 
    
Pure Storage

Advances in Data Warehouse Performance

Published By: IBM     Published Date: May 30, 2008
WinterCorp analyzes IBM's DB2 Warehouse and how it addresses the twin challenges facing enterprises today: improving the value derived from the torrents of information processed every day while lowering costs at the same time. Discover why WinterCorp believes the advances in data clustering strategies and intelligent software compression algorithms in DB2 Warehouse improve the performance of business intelligence queries by radically reducing the I/Os needed to resolve them.
Tags : 
data warehousing, data management, database management, database administration, dba, business intelligence, ibm, leveraging information, li campaign, ibm li, data integration, information management
    
IBM

Affordable, Scalable, Reliable OLTP in a Cloud and Big Data World: IBM DB2 pureScale

Published By: IBM     Published Date: Jul 05, 2016
This white paper discusses the concept of shared data scale-out clusters, as well as how they deliver continuous availability and why they are important for delivering scalable transaction processing support.
Tags : 
ibm, always on business, cloud, big data, oltp, ibm db2 purescale, networking, knowledge management, enterprise applications, data management, business technology, data center
    
IBM

Affordable, Scalable, Reliable OLTP in a Cloud and Big Data World: IBM DB2 pureScale

Published By: IBM     Published Date: Oct 13, 2016
Compare IBM DB2 pureScale with any other offering being considered for implementing a clustered, scalable database configuration; see how each delivers continuous availability and why that matters. Download now!
Tags : 
data. queries, database operations, transactional databases, clustering, it management, storage, business technology
    
IBM

Happy Retirement, Disaster Recovery

Published By: Pure Storage     Published Date: Sep 27, 2019
WELCOME ABOARD, BUSINESS CONTINUITY Thank you for your years of service, disaster recovery. We couldn't have done it without you. But now we are leaving the 1970s disaster recovery mindset behind. There is now a business continuity model built for the high-availability world, one that stands out with these qualities: - Agility - Efficiency - Seamlessness ITS NAME? PURITY ACTIVECLUSTER
Tags : 
    
Pure Storage

Amazon Redshift Spectrum: expert tips for maximizing the power of Spectrum

Published By: AWS     Published Date: Sep 05, 2018
Amazon Redshift Spectrum—a single service that can be used in conjunction with other Amazon services and products, as well as external tools—is revolutionizing the way data is stored and queried, allowing for more complex analyses and better decision making. Spectrum allows users to query very large datasets on S3 without having to load them into Amazon Redshift. This helps address the Scalability Dilemma—with Spectrum, data storage can keep growing on S3 and still be processed. By utilizing its own compute power and memory, Spectrum handles the hard work that would normally be done by Amazon Redshift. With this service, users can now scale to accommodate larger amounts of data than the cluster would have been capable of processing with its own resources.
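As a sketch of the pattern the abstract describes, the statements below define an external schema and table over files that stay on S3 and then query them in place; the schema, database, IAM role, and bucket names are all hypothetical, chosen only for illustration.

```python
# Sketch of querying S3-resident data in place with Redshift Spectrum.
# All names (schema, database, IAM role, bucket) are hypothetical;
# these statements would be run against a Redshift cluster.

CREATE_SCHEMA = """
CREATE EXTERNAL SCHEMA spectrum_demo
FROM DATA CATALOG
DATABASE 'demo_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/DemoSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""

CREATE_TABLE = """
CREATE EXTERNAL TABLE spectrum_demo.events (
    event_id   BIGINT,
    event_time TIMESTAMP,
    payload    VARCHAR(256)
)
STORED AS PARQUET
LOCATION 's3://demo-bucket/events/';
"""

# The external table can then be queried with ordinary SQL; the data
# never has to be loaded into the cluster, only scan results flow back.
QUERY = """
SELECT DATE_TRUNC('day', event_time) AS day, COUNT(*) AS events
FROM spectrum_demo.events
GROUP BY 1
ORDER BY 1;
"""
```

Because the table's storage stays on S3, it can keep growing independently of the cluster's local disk, which is the "Scalability Dilemma" point made above.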
Tags : 
    
AWS

Amazon Redshift Spectrum: expert tips for maximizing the power of Spectrum

Published By: Amazon Web Services     Published Date: Sep 05, 2018
Amazon Redshift Spectrum—a single service that can be used in conjunction with other Amazon services and products, as well as external tools—is revolutionizing the way data is stored and queried, allowing for more complex analyses and better decision making. Spectrum allows users to query very large datasets on S3 without having to load them into Amazon Redshift. This helps address the Scalability Dilemma—with Spectrum, data storage can keep growing on S3 and still be processed. By utilizing its own compute power and memory, Spectrum handles the hard work that would normally be done by Amazon Redshift. With this service, users can now scale to accommodate larger amounts of data than the cluster would have been capable of processing with its own resources.
Tags : 
    
Amazon Web Services

Anatomy of the Cloudant Data Layer

Published By: Cloudant, an IBM Company     Published Date: May 15, 2014
Learn how a Cloudant account can be hosted within a multi-tenant Cloudant cluster, or on a single-tenant cluster running on dedicated hardware hosted within a top-tier cloud provider like Rackspace or IBM SoftLayer.
Tags : 
cloudant, cloud, cloud computing, data layer, dbms, dsaas, saas, data replication, data delivery, distributed computing, mobile data systems, data integration
    
Cloudant, an IBM Company

Apache Hadoop: Is one cluster enough?

Published By: WANdisco     Published Date: Oct 15, 2014
In this Gigaom Research webinar, the panel will discuss how the multi-cluster approach can be implemented in real systems, and whether and how it can be made to work. The panel will also talk about best practices for implementing the approach in organizations.
Tags : 
wandisco, wan, wide area network, hadoop, clusters, clustering, load balancing, data, big data, data storage, storage, wide area networks, storage area networks
    
WANdisco

Application and Networking Services for OpenShift-Kubernetes Clusters

Published By: Avi Networks     Published Date: Mar 06, 2019
OpenShift-Kubernetes offers an excellent automated application deployment framework for container-based workloads. Services such as traffic management (load balancing within a cluster and across clusters/regions), service discovery, monitoring/analytics, and security are a critical component of an application deployment framework. Enterprises require a scalable, battle-tested, and robust services fabric to deploy business-critical workloads in production environments. This white paper provides an overview of the requirements for such application services and explains how Avi Networks provides a proven services fabric to deploy container-based workloads in production environments using OpenShift-Kubernetes clusters.
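The cross-cluster traffic-management idea mentioned above can be illustrated with a minimal sketch: a round-robin picker over endpoints spread across several clusters. The cluster names and addresses are made up, and a real services fabric would add health checks, weighting, and locality awareness.

```python
# Minimal sketch of round-robin load balancing across cluster endpoints.
# Cluster names and endpoint addresses are hypothetical.

from itertools import cycle

class RoundRobinBalancer:
    """Cycles through a fixed pool of endpoints, one per request."""
    def __init__(self, endpoints):
        self._pool = cycle(endpoints)

    def next_endpoint(self):
        return next(self._pool)

clusters = {
    "us-east": ["10.0.1.10:443", "10.0.1.11:443"],
    "eu-west": ["10.1.1.10:443"],
}
# Flatten all clusters' endpoints into one pool
balancer = RoundRobinBalancer([ep for eps in clusters.values() for ep in eps])
print([balancer.next_endpoint() for _ in range(4)])
```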
Tags : 
    
Avi Networks

Apply Artificial Intelligence to Information Security Problems

Published By: BlackBerry Cylance     Published Date: Jul 02, 2018
The information security world is rich with information. From reviewing logs to analyzing malware, information is everywhere, in quantities far greater than the workforce can cover. Artificial intelligence (AI) is a field of study that is adept at applying intelligence to vast amounts of data and deriving meaningful results. In this book, we cover machine learning techniques in practical situations to improve your ability to thrive in a data-driven world. With clustering, we explore grouping items and identifying anomalies. With classification, we cover how to train a model to distinguish between classes of inputs. With probability, we answer the question "What are the odds?" and make use of the results. With deep learning, we dive into the powerful biology-inspired realms of AI that power some of the most effective methods in machine learning today. Learn more about AI in this eBook.
Tags : 
artificial, intelligence, enterprise
    
BlackBerry Cylance

Are Your Capacity Management Processes Fit for the Cloud Era?

Published By: VMTurbo     Published Date: Mar 25, 2015
An Intelligent Roadmap for Capacity Planning
Many organizations apply overly simplistic principles to determine requirements for compute capacity in their virtualized data centers. These principles are based on a resource allocation model which takes the total amount of memory and CPU allocated to all virtual machines in a compute cluster, and assumes a defined level of overprovisioning (e.g. 2:1, 4:1, 8:1, 12:1) in order to calculate the requirement for physical resources. Often managed in spreadsheets or simple databases, and augmented by simple alert-based monitoring tools, the resource allocation model does not account for actual resource consumption driven by each application workload running in the operational environment, and inherently erodes the level of efficiency that can be driven from the underlying infrastructure.
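The allocation-based arithmetic the abstract critiques can be made concrete with a short sketch; the VM counts, per-VM allocations, and the 4:1 ratio are all hypothetical, chosen only to illustrate the calculation.

```python
# Sketch of the resource-allocation capacity model described above.
# VM counts, allocations, and the overprovisioning ratio are hypothetical.

def physical_requirement(total_allocated, overprovision_ratio):
    """Physical capacity implied by allocations at a given ratio (e.g. 4:1)."""
    return total_allocated / overprovision_ratio

vms = 200
allocated_ram_gb = vms * 8   # 8 GB allocated per VM -> 1600 GB total
allocated_vcpus = vms * 4    # 4 vCPUs per VM        -> 800 vCPUs total

ratio = 4  # assumed 4:1 overprovisioning
print(physical_requirement(allocated_ram_gb, ratio))  # physical RAM (GB)
print(physical_requirement(allocated_vcpus, ratio))   # physical cores
```

As the abstract argues, this sizes the cluster from allocations alone; a workload-aware model would instead start from measured consumption per application.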
Tags : 
capacity planning, vmturbo, resource allocation model, cpu, cloud era, it management, knowledge management, enterprise applications
    
VMTurbo

Assuring Service Level Achievement Through Dynamic Workload Automation

Published By: CA WA     Published Date: May 12, 2008
Emerging information technologies like composite and multi-tier applications, service oriented architectures (SOA), virtualization, grid and cluster computing, and Web Service-based application delivery create an extraordinary opportunity for IT managers. With these dynamic technologies, they can provide business owners with IT services that are extremely flexible and agile, driving a more dynamic and competitive business.
Tags : 
ca wa, workload automation, roi, return on investment
    
CA WA

Achieving Breakthrough Business Continuity While Keeping Cost and Complexity in Check

Published By: Pure Storage     Published Date: Sep 27, 2019
In today's environment, business continuity is a must. As organizations embrace digital transformation and rely on IT on a massive scale, downtime can be crippling. Those who can weather technical failures transparently and keep their business running gain significant competitive advantage, stronger customer loyalty, and more business innovation. Yet achieving the highest levels of business continuity, with zero recovery point objectives (RPOs) and zero recovery time objectives (RTOs), has always been reserved for the largest enterprises, and even then only for their most business-critical applications. The associated cost and complexity are simply too high for most organizations. That is now a thing of the past. This white paper discusses a new active/active stretched-cluster technology that lets companies large and small afford the highest levels of continuity without the cost and complexity.
Tags : 
    
Pure Storage

Best Practices in High Availability Cluster MultiProcessing

Published By: IBM     Published Date: Feb 25, 2008
IBM HACMP supports a wide variety of configurations and provides the cluster administrator with a great deal of flexibility. With this flexibility comes the responsibility to make wise choices. This paper discusses the choices that the cluster designer can make and the alternatives that provide the highest level of availability.
Tags : 
high availability, backup, recovery, utility computing, network management, ibm, backup and recovery
    
IBM

Big Workflow: More than Just Intelligent Workload Management for Big Data

Published By: Adaptive Computing     Published Date: Feb 06, 2014
A new approach, known as “Big Workflow,” is being created by Adaptive Computing to address the needs of these applications. It is designed to unify public clouds, private clouds, MapReduce-type clusters, and technical computing clusters. Download now to learn more.
Tags : 
adaptive, adaptive computing, big data, data center, workload management, servers, cloud, cloud computing, storage, data storage, backup and recovery, blade servers, storage management, data warehousing, green computing, data center design and management, business technology
    
Adaptive Computing

Bringing the Power of SAS to Hadoop

Published By: SAS     Published Date: Oct 18, 2017
Want to get even more value from your Hadoop implementation? Hadoop is an open-source software framework for running applications on large clusters of commodity hardware, delivering fast processing and the ability to handle virtually limitless concurrent tasks and jobs. This makes it a remarkably low-cost complement to a traditional enterprise data infrastructure. This white paper presents the SAS portfolio of solutions that enable you to bring the full power of business analytics to Hadoop. These solutions span the entire analytic life cycle – from data management to data exploration, model development and deployment.
Tags : 
    
SAS