
[TAF] templates://hadoop/{hadoop-namenoded.templates}



The hadoop package introduced new or modified debconf
templates. This is the perfect moment for a review to help the package
maintainer follow the suggested writing style and to track down typos
and errors in the use of the English language.

If someone wants to pick up this review, please reply to this mail
on the mailing list with an [ITR] (Intent To Review) label.

The templates file is attached.

To propose the file you reviewed for peer review, please send an [RFR]
(Request For Review) mail with the reviewed file attached. Then, a few
days later, when no more contributions are coming in, send a summary
mail with an [LCFC] (Last Chance For Comments) label.

Finally, once no more comments arrive in response to the LCFC mail, you
can submit the reviewed templates file as a bug report against the
package.

Then, please notify the list with a final mail using a [BTS] label
and the bug number.
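
For illustration, the subject lines over the course of this review
would look roughly as follows (the bug number is a placeholder):

  [ITR] templates://hadoop/{hadoop-namenoded.templates}
  [RFR] templates://hadoop/{hadoop-namenoded.templates}
  [LCFC] templates://hadoop/{hadoop-namenoded.templates}
  [BTS#NNNNNN] templates://hadoop/{hadoop-namenoded.templates}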

It would also be nice to help the package maintainer deal with the
translation updates induced by the review. If you're not comfortable
with that part of the process, please hand it off to a translator.
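
If you do take that on, the po-debconf package ships a helper for it;
a rough sketch (the paths are illustrative):

  cd hadoop-*/debian/po
  podebconf-report-po --call

podebconf-report-po mails the translators of outdated PO files; the
--call option additionally sends out a call for new translations.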

-- 


Template: hadoop-namenoded/format
Type: boolean
Default: false
_Description: Should the namenode's filesystem be formatted now?
 The namenode manages the Hadoop Distributed File System (HDFS). Like a
 normal filesystem, it needs to be formatted prior to first use. If the
 HDFS filesystem is not formatted, the namenode daemon will fail to
 start.
 .
 This operation does not affect the "normal" filesystem on this
 computer. If you are using HDFS for the first time and do not have data
 from previous installations on this computer, it should be safe to
 choose this option.
 .
 You can format the filesystem later yourself with the following
 command:
 .
  su -c "hadoop namenode -format" hadoop
Source: hadoop
Section: java
Priority: optional
Maintainer: Debian Java Maintainers <pkg-java-maintainers@lists.alioth.debian.org>
Uploaders: Thomas Koch <thomas.koch@ymc.ch>
Homepage: http://hadoop.apache.org
Vcs-Browser: http://git.debian.org/?p=pkg-java/hadoop.git
Vcs-Git: git://git.debian.org/pkg-java/hadoop.git
Standards-Version: 3.8.4
Build-Depends: debhelper (>= 7.4.11), default-jdk, ant (>= 1.6.0), javahelper (>= 0.28),
 po-debconf,
 libcommons-cli-java,
 libcommons-codec-java,
 libcommons-el-java,
 libcommons-httpclient-java,
 libcommons-io-java,
 libcommons-logging-java,
 libcommons-net-java,
 libtomcat6-java,
 libjetty-java (>>6),
 libservlet2.5-java,
 liblog4j1.2-java,
 libslf4j-java,
 libxmlenc-java,
 liblucene2-java,
 libhsqldb-java,
 ant-optional,
 javacc

Package: libhadoop-java
Architecture: all
Depends: ${misc:Depends}, 
 libcommons-cli-java,
 libcommons-codec-java,
 libcommons-el-java,
 libcommons-httpclient-java,
 libcommons-io-java,
 libcommons-logging-java,
 libcommons-net-java,
 libtomcat6-java,
 libjetty-java (>>6),
 libservlet2.5-java,
 liblog4j1.2-java,
 libslf4j-java,
 libxmlenc-java
Suggests: libhsqldb-java
Description: software platform for processing vast amounts of data (core libraries)
 This package contains the core Java libraries used by Hadoop.

Package: libhadoop-index-java
Architecture: all
Depends: ${misc:Depends}, libhadoop-java (= ${binary:Version}),
 liblucene2-java
Description: Hadoop contrib to create Lucene indexes
 This contrib package provides a utility to build or update an index
 using Map/Reduce.
 .
 A distributed "index" is partitioned into "shards". Each shard corresponds
 to a Lucene instance. org.apache.hadoop.contrib.index.main.UpdateIndex
 contains the main() method which uses a Map/Reduce job to analyze documents
 and update Lucene instances in parallel.

Package: hadoop-bin
Section: misc
Architecture: all
Depends: ${misc:Depends}, libhadoop-java (= ${binary:Version}),
 default-jre-headless | java6-runtime-headless
Description: software platform for processing vast amounts of data
 Hadoop is a software platform that lets one easily write and
 run applications that process vast amounts of data.
 .
 Here's what makes Hadoop especially useful:
  * Scalable: Hadoop can reliably store and process petabytes.
  * Economical: It distributes the data and processing across clusters
                of commonly available computers. These clusters can number
                into the thousands of nodes.
  * Efficient: By distributing the data, Hadoop can process it in parallel
               on the nodes where the data is located. This makes
               processing extremely fast.
  * Reliable: Hadoop automatically maintains multiple copies of data and
              automatically redeploys computing tasks based on failures.
 .
 Hadoop implements MapReduce, using the Hadoop Distributed File System (HDFS).
 MapReduce divides applications into many small blocks of work. HDFS creates
 multiple replicas of data blocks for reliability, placing them on compute
 nodes around the cluster. MapReduce can then process the data where it is
 located.
 .
 This package contains the Hadoop shell interface. See the hadoop-.*d
 packages for the Hadoop daemons.

Package: hadoop-daemons-common
Section: misc
Architecture: all
Depends: ${misc:Depends}, hadoop-bin (= ${binary:Version}), daemon, adduser,
 lsb-base (>= 3.2-14)
Description: creates the user and directories for Hadoop daemons
 This package prepares some common groundwork for all Hadoop daemon
 packages:
  * creates the user hadoop
  * creates data and log directories owned by the hadoop user
  * manages the update-alternatives mechanism for hadoop configuration
  * brings in the common dependencies

Package: libhadoop-java-doc
Section: doc
Architecture: all
Depends: ${misc:Depends}, libhadoop-java (= ${binary:Version})
Description: API documentation for Hadoop
 This package contains the Javadoc API documentation of Hadoop.

Package: hadoop-tasktrackerd
Section: misc
Architecture: all
Depends: ${misc:Depends}, hadoop-daemons-common (= ${binary:Version})
Description: Task Tracker for Hadoop
 The Task Tracker is the Hadoop service that accepts MapReduce tasks and
 computes results. Each node in a Hadoop cluster that performs
 computation should run a Task Tracker.

Package: hadoop-jobtrackerd
Section: misc
Architecture: all
Depends: ${misc:Depends}, hadoop-daemons-common (= ${binary:Version})
Description: Job Tracker for Hadoop
 The jobtracker is a central service responsible for managing the
 tasktracker services running on all nodes in a Hadoop cluster. The
 jobtracker allocates work to the tasktracker with an available work
 slot that is nearest to the data.

Package: hadoop-namenoded
Section: misc
Architecture: all
Depends: ${misc:Depends}, hadoop-daemons-common (= ${binary:Version})
Description: Name Node for Hadoop
 The Hadoop Distributed File System (HDFS) requires one unique server, the
 namenode, which manages the block locations of files on the filesystem.

Package: hadoop-secondarynamenoded
Section: misc
Architecture: all
Depends: ${misc:Depends}, hadoop-daemons-common (= ${binary:Version})
Description: Secondary Name Node for Hadoop
 The Secondary Name Node is responsible for checkpointing file system images.
 It is _not_ a failover pair for the namenode, and may safely be run on the
 same machine.

Package: hadoop-datanoded
Section: misc
Architecture: all
Depends: ${misc:Depends}, hadoop-daemons-common (= ${binary:Version})
Description: Data Node for Hadoop
 The Data Nodes in a Hadoop cluster are responsible for serving up
 blocks of data over the network to Hadoop Distributed File System
 (HDFS) clients.
