• English

Cloud infrastructure

Author: Source: Wikipedia


Made to order, delivery in 2-4 weeks

13.14 €

regular price: 14.60 €

About the book

Source: Wikipedia. Pages: 27. Chapters: Apache Hadoop, Amazon Elastic Compute Cloud, Rackspace Cloud, Iland, OpenNebula, Kaavo, CloudSigma, TurnKey Linux Virtual Appliance Library, Eucalyptus, Jitscale, OpSource, Sector/Sphere, Cloudant, Cloud.com, Sun Cloud, Cloud.bg, OpenStack, Enomaly Inc, Linode, Cloudkick, Skytap, Nimbula, EnStratus, Apache Hama, RightScale, Scalr, ElasticHosts, GoGrid, Cloudera, BigCouch, Amazon Machine Image, Riak, AppNexus, Nimbus, Pig, C12G, Wakame-vdc, Amazon Elastic Block Store, Apache Hive, Amazon Simple Email Service.

Excerpt: Apache Hadoop is a software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google's MapReduce and Google File System (GFS) papers. Hadoop is a top-level Apache project built and used by a global community of contributors, using the Java programming language. Yahoo! has been the largest contributor to the project and uses Hadoop extensively across its businesses. Hadoop was created by Doug Cutting, who named it after his son's toy elephant. It was originally developed to support distribution for the Nutch search engine project. Hadoop consists of Hadoop Common, which provides access to the filesystems supported by Hadoop. The Hadoop Common package contains the necessary JAR files and scripts needed to start Hadoop. The package also provides source code, documentation, and a contribution section that includes projects from the Hadoop community. For effective scheduling of work, every Hadoop-compatible filesystem should provide location awareness: the name of the rack (more precisely, of the network switch) where a worker node is. Hadoop applications can use this information to run work on the node where the data is and, failing that, on the same rack/switch, thus reducing backbone traffic.
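The MapReduce model that inspired Hadoop can be sketched in a few lines. This is a toy illustration in plain Python, not Hadoop's actual Java API; all function and variable names here are made up for the example.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in an input split."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all values emitted for the same key together.
    In a real cluster this step moves data between nodes."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: combine the grouped values; here, sum the counts."""
    return {word: sum(counts) for word, counts in grouped.items()}

# Two "splits" standing in for blocks of a large input file.
splits = ["hadoop scales to thousands of nodes",
          "hadoop stores petabytes of data"]
pairs = [pair for split in splits for pair in map_phase(split)]
counts = reduce_phase(shuffle(pairs))
print(counts["hadoop"])  # prints 2: each split contributed one occurrence
```

Because each split is mapped independently and each key is reduced independently, both phases can run in parallel across many nodes, which is what lets the model scale.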
The Hadoop Distributed File System (HDFS) uses this rack awareness when replicating data, to try to keep different copies of the data on different racks. The goal is to reduce the impact of a rack power outage or switch failure so that even if these events occur, the data may still be readable. A small Hadoop cluster will include a single master and multiple worker nodes. The master node consists of a jobtracker, tasktracker, namenode, and datanode. A slave or worker node consists of a datanode and tasktracker, though it is also possible to have data-only and compute-only worker nodes; these are normally only used…
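The rack-aware replica placement described above can be sketched as follows. This is a simplified Python illustration of the default HDFS policy for a replication factor of 3 (first replica on the writer's node, the other two on nodes in a different rack); the real HDFS placement policy handles many more edge cases, and the topology format here is an assumption made for the example.

```python
import random

def place_replicas(writer_node, topology, rng=random):
    """Pick 3 nodes for a block's replicas.

    topology: dict mapping rack name -> list of node names.
    Assumes at least two racks and at least two nodes per rack.
    """
    writer_rack = next(rack for rack, nodes in topology.items()
                       if writer_node in nodes)
    # First replica: on the node writing the data (data locality).
    first = writer_node
    # Second and third replicas: two different nodes in one remote rack,
    # so a whole-rack failure cannot take out every copy.
    remote_rack = rng.choice([r for r in topology if r != writer_rack])
    second, third = rng.sample(topology[remote_rack], 2)
    return [first, second, third]

topology = {"rack1": ["n1", "n2", "n3"],
            "rack2": ["n4", "n5", "n6"]}
replicas = place_replicas("n1", topology)
print(replicas)  # e.g. ['n1', 'n5', 'n4']
```

Losing rack1 still leaves two copies in rack2, and losing rack2 still leaves the copy on the writer's node, which is exactly the failure mode the excerpt says the policy guards against.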

  • Publisher: Books LLC, Reference Series
  • Year of publication: 2014
  • Format: Paperback
  • Dimensions: 246 x 189 mm
  • Language: English
  • ISBN: 9781155345918
