What brand is the Storm eBike bicycle, and how much does it cost?

A budget electric-assist bicycle: the Storm eBike
The Storm eBike is a bicycle born in the hands of a design team. Unlike other products, the Storm eBike does not tout any novel functions or eye-catching looks; what it offers is solid value for money, at an affordable price of a little over 3,000 yuan.
Through crowdfunding, the design team will begin producing this electric-assist bicycle in March, and finished bikes are expected in May. Drawing on years of manufacturing experience, the team has cut costs to the minimum, while still guaranteeing quality.
The bike's steel frame brings the weight to about 20 kg, considerably heavier than a normal bicycle; steel is the most cost-effective choice of frame material, and it is sturdy and durable. Paired with a heavy set of snow wheels and tires, the bike can shrug off all kinds of road surfaces, which is presumably meant to compensate for the Storm eBike's rigid-fork design. Thanks to this setup combined with the electric-assist system, riding the Storm eBike on sand and snow should be very easy.
On the electric-assist side, the Storm eBike has a simple and convenient shifting system, a powerful lithium battery, and a 380 W, 36 V DC drive motor. The top speed reaches 32 km/h, the battery fully charges in an hour and a half, and a range of 50 to 80 km is no problem. The motor can also be switched off, turning the bike into a normal pedal-powered ride; given the Storm eBike's weight, working out your leg muscles that way should be very easy.
1. Overview
Intended Audience
This document is intended for development personnel experienced in Java who need to perform Storm secondary development for FusionInsight HD.
Introduction
Storm is a distributed, reliable, and fault-tolerant data stream processing system. Storm delegates work to different types of components, each of which handles a specific, simple task. Storm processes big data streams in real time and handles unlimited data streams reliably.
Storm applies to real-time analytics, online machine learning, continuous computation, and distributed Extract, Transform, and Load (ETL). It is scalable and fault-tolerant, and easy to set up and operate, which ensures reliable data processing.
Storm has the following features:
●  Wide applications
●  Scalable
●  Free from data loss
●  Fault-tolerant
●  Language-independent
Basic Concepts
Topology
A computation graph in which each node contains processing logic and the lines between nodes specify the data flow between them.
Spout
A component that generates source data streams in a topology. A Spout reads data from an external data source and converts it into source data inside the topology.
Bolt
A component that receives data in a topology and then processes it. A Bolt can perform operations such as filtering, executing functions, combining data, and writing data to a database.
Tuple
The basic unit for transferring a message once.
Stream
A set of (infinite) elements, each of which belongs to the same schema. Each element is related to a logical time; that is, streams have tuple and time attributes. Any element can be expressed as Element<tuple, Time>, where tuple includes the data structure and content, and Time is the logical time of the data.
keytab file
A key file for storing user information. Applications use the key file for API authentication on FusionInsight HD.
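For example, a minimal hedged sketch (the file name user.keytab and the Config key mirror the code samples later in this guide) of how an application points Storm at its keytab file:

import backtype.storm.Config;

public class KeytabConfigExample
{
    public static void main(String[] args)
    {
        Config conf = new Config();
        // Name of the keytab file obtained from the administrator; user.keytab is
        // the default name mentioned later in this guide (illustrative value).
        conf.put(Config.STORM_CLIENT_KEYTAB_FILE, "user.keytab");
    }
}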
2. Development Environment Preparation
This guide introduces the Eclipse sample project and common interfaces of the Streaming component of Huawei FusionInsight Stream, enabling development personnel to quickly get familiar with Storm development. To execute the Storm sample code of Huawei FusionInsight Stream, perform the following operations:
Prepare clients for developing and submitting applications. Generally, applications are developed in Windows and submitted in Linux.
Procedure for Developing an Application
Step 1  Verify that the Streaming component of Huawei FusionInsight Stream has been installed and is running properly.
Step 2  Install Eclipse and the Java Development Kit (JDK) on the client. Requirements are as follows:
●  Use Eclipse 3.0 or later.
●  Use JDK 1.7 or later.
Step 3  Install the Java Cryptography Extension (JCE) policy files that match the JDK version. To obtain the software, visit the official website.
Copy the JAR files to the following directory:
●  Linux: <java-home>/lib/security
●  Windows: <java-home>\lib\security
In the paths, <java-home> indicates the JRE installation directory. The following table describes the <java-home> directory for typical JDK installation directories.
Table  <java-home> directories
JDK installation directory              <java-home>
Linux: /home/user1/jdk1.8.0             /home/user1/jdk1.8.0/jre
Windows: C:\jdk1.8.0                    C:\jdk1.8.0\jre
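Step 3 can be verified with a small hedged check (standard JDK API only; the class name is illustrative). With the unlimited-strength JCE policy files installed, the printed value is very large instead of the default 128:

import javax.crypto.Cipher;

public class JcePolicyCheck
{
    public static void main(String[] args) throws Exception
    {
        // Prints Integer.MAX_VALUE when the unlimited-strength policy files are
        // installed in <java-home>/lib/security; prints 128 with the default policy.
        System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));
    }
}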
Step 4  Ensure that the time difference between the client and the FusionInsight Stream cluster is less than 5 minutes.
Step 5  Download the Streaming client program.
Log in to FusionInsight Manager.
Enter the address in the address box of your browser. The address format is http://<WebService floating IP address of FusionInsight Manager>:8080/web.
For example, enter http://10.0.0.1:8080/web.
Choose Services > Streaming > Download Client > All Client Files and download the client program to the local computer.
Step 6  Decompress the Streaming client program and double-click install.bat to automatically configure the client project.
The Streaming/storm-examples folder in the directory where the package is decompressed is the sample project folder.
Step 7  Configure network connections for the client to ensure that the local computer can communicate with the hosts listed in the hosts file in the directory where the package is decompressed.
Local hosts file paths:
●  Windows: C:\WINDOWS\system32\drivers\etc\hosts
●  Linux: /etc/hosts
Step 8  Import the sample project into the Eclipse development environment.
Choose File > Import > General > Existing Projects into Workspace > Next > Browse.
The dialog box for browsing directories is displayed.
Select the sample project folder and click Finish.
Step 9  Set the Eclipse text file encoding format to prevent invalid characters.
On the Eclipse menu bar, choose Window > Preferences.
The Preferences window is displayed.
Choose General > Workspace in the navigation tree, select Other in the Text file encoding area, set it to UTF-8, click Apply, and then click OK, as shown in the following figure.
Figure  Setting the Eclipse encoding format
Procedure for Submitting an Application
Step 1  Verify that the Streaming component of Huawei FusionInsight Stream has been installed and is running properly.
Step 2  Ensure that the time difference between the client and the FusionInsight Stream cluster is less than 5 minutes.
Step 3  Download the Streaming client program.
Log in to FusionInsight Manager.
Enter the address in the address box of your browser. The address format is http://<WebService floating IP address of FusionInsight Manager>:8080/web.
For example, enter http://10.0.0.1:8080/web.
Choose Services > Streaming > Download Client and download the client program to the local computer.
Step 4  Upload the installation package to a Linux server.
For example, upload the installation package to /opt/client_install on the Linux server.
Step 5  Log in to the Linux server, go to /opt/client_install, and decompress the client software package in the current directory.
Step 6  In the /opt/client_install directory where the client software package is decompressed, run the following command to install the Streaming client:
./install.sh <client installation directory>
For example, if the client installation directory is /opt/Streaming_Client, run the following command:
./install.sh /opt/Streaming_Client
Step 7  Initialize client environment variables.
Go to the installation directory /opt/Streaming_Client and run the following command to import environment variables:
source bigdata_env
Step 8  If the security service is enabled for the cluster, log in in security mode.
Obtain a Human-Machine user from the administrator for FusionInsight Stream platform login authentication. For example, assume the account is john.
For details about how to obtain the user, see the Administrator Guide.
Run the kinit command to log in in security mode as the Human-Machine user:
kinit <user name>
For example, run the following command:
kinit john
Enter the password as prompted. If no error message appears, Kerberos authentication is complete for the user.
Step 9  Go to Streaming/streaming-0.9.2/bin in the installation directory and run the following command:
./storm list
If the command output lists task information, indicating that the Streaming cluster is running properly, the client is successfully installed.
3. Development Guidelines
Application Instance
The procedure for developing an application that counts the number of times each word appears in random text is as follows:
Create a Spout to generate random texts.
See 4.1 Creating a Spout.
Create a Bolt to split a random text into words.
See 4.2 Creating a Bolt.
Create a Bolt to calculate the number of times that each word appears.
See 4.2 Creating a Bolt.
Create a Topology.
See 4.3 Creating a Topology.
For details about parts of the code, see 4. Example Codes. For the complete code, see the storm-examples sample project.
4. Example Codes
4.1 Creating a Spout
Function Description
A Spout is a message source of Storm and the message producer of a Topology. Generally, a message source reads data from an external source and sends messages (Tuples) to the Topology.
One message source can send multiple message streams; therefore, OutputFieldsDeclarer.declareStream can be used to define multiple streams, and SpoutOutputCollector can then emit to a specific stream.
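For example (a minimal hedged sketch; the stream names and field names are illustrative and not part of the sample project), a spout can declare two named streams and then emit to one of them by id:

@Override
public void declareOutputFields(OutputFieldsDeclarer declarer)
{
    // Each named stream gets its own schema.
    declarer.declareStream("sentences", new Fields("sentence"));
    declarer.declareStream("signals", new Fields("signal"));
}

@Override
public void nextTuple()
{
    // Passing the stream id as the first argument emits to that specific stream.
    collector.emit("sentences", new Values("an apple a day keeps the doctor away"));
}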
Code Sample
The following code snippets are in the com.huawei.streaming.storm.example.RandomSentenceSpout class, and they are used to generate random sentences and send them.
/**
 * {@inheritDoc}
 */
@Override
public void nextTuple()
{
    Utils.sleep(100);
    String[] sentences =
        new String[] {"the cow jumped over the moon",
                      "an apple a day keeps the doctor away",
                      "four score and seven years ago",
                      "snow white and the seven dwarfs",
                      "i am at two with nature"};
    String sentence = sentences[random.nextInt(sentences.length)];
    collector.emit(new Values(sentence));
}
4.2 Creating a Bolt
Function Description
All message processing logic is encapsulated in Bolts. Bolts provide multiple functions, such as filtering and aggregation.
A Bolt can likewise emit multiple streams: OutputFieldsDeclarer.declareStream can be used to define the streams, and OutputCollector.emit can be used to select the stream to emit to.
Code Sample
The following code snippets are in the com.huawei.streaming.storm.example.SplitSentenceBolt class, and they are used to split a sentence into words and send them.
/**
 * {@inheritDoc}
 */
@Override
public void execute(Tuple input, BasicOutputCollector collector)
{
    String sentence = input.getString(0);
    String[] words = sentence.split(" ");
    for (String word : words)
    {
        word = word.trim();
        if (!word.isEmpty())
        {
            word = word.toLowerCase();
            collector.emit(new Values(word));
        }
    }
}
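The snippet above only emits word values; for the fieldsGrouping("split", new Fields("word")) call in 4.3 Creating a Topology to work, the bolt must also declare that output field. A hedged sketch of the declaration, which is not shown in the excerpt above:

@Override
public void declareOutputFields(OutputFieldsDeclarer declarer)
{
    // The declared field name must match the one used by fieldsGrouping in the topology.
    declarer.declare(new Fields("word"));
}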
The following code snippets are in the com.huawei.streaming.storm.example.WordCountBolt class, and they are used to count the number of times each word is received.
@Override
public void execute(Tuple tuple, BasicOutputCollector collector)
{
    String word = tuple.getString(0);
    Integer count = counts.get(word);
    if (count == null)
    {
        count = 0;
    }
    count++;
    counts.put(word, count);
    System.out.println("word: " + word + ", count: " + count);
}
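The counts map used above is a field of the bolt instance and is not shown in the excerpt; an assumed minimal declaration:

// Per-bolt-task word counts (assumed declaration, initialized once per bolt instance).
private final Map<String, Integer> counts = new HashMap<String, Integer>();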
4.3 Creating a Topology
Function Description
A topology is a directed acyclic graph (DAG) consisting of Spouts and Bolts.
Applications are submitted in storm jar mode. Therefore, a function that creates the topology must be invoked in the main function, and the class containing the main function must be specified in the storm jar parameters.
Code Sample
The following code snippets are in the com.huawei.streaming.storm.example.WordCountTopology class, and they are used to create and submit the application:
public static void main(String[] args)
    throws Exception
{
    TopologyBuilder builder = buildTopology();

    /*
     * Tasks can be submitted in the following three modes:
     * 1. Command line submit. In this mode, a user must copy the application .jar package to a client and run related commands on the client.
     * 2. Remote submit. In this mode, a user must package the application .jar packages and execute the main method in Eclipse.
     * 3. Local submit. In this mode, a user runs the application on a local computer for testing.
     * The command line submit and remote submit modes support both security and non-security modes.
     * The local submit mode supports the non-security mode only.
     *
     * A user can select only one mode for submitting a task. By default, the command line mode is used. To use another mode, delete the code comments.
     */
    cmdSubmit(builder, null);

    //remoteSubmit
    //remoteSubmit(builder);

    //localSubmit, for test
    //localSubmit(builder);
}

private static void cmdSubmit(TopologyBuilder builder, Config conf)
    throws AlreadyAliveException, InvalidTopologyException, NotALeaderException, AuthorizationException
{
    if (conf == null)
    {
        conf = new Config();
    }
    /**
     * Command line submit
     * The procedure is as follows:
     * Package the packages and then submit the task on the client CLI.
     *   Package the application .jar package and the other externally dependent .jar packages into one .jar package. The externally dependent packages are those required by the user program, not provided by the example project.
     *   Run storm jar on a Storm client.
     *
     * In a security environment, before submitting a task on a client CLI, run the kinit command to log in in security mode.
     *
     * Run the following command:
     * ./storm jar ../example/example.jar com.huawei.streaming.storm.example.WordCountTopology
     */
    conf.setNumWorkers(1);
    StormSubmitter.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
}

private static void localSubmit(TopologyBuilder builder)
    throws InterruptedException
{
    Config conf = new Config();
    conf.setDebug(true);
    conf.setMaxTaskParallelism(3);
    LocalCluster cluster = new LocalCluster();
    cluster.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
    Thread.sleep(10000);
    cluster.shutdown();
}

private static void remoteSubmit(TopologyBuilder builder)
    throws AlreadyAliveException, InvalidTopologyException, NotALeaderException, AuthorizationException, IOException
{
    Config config = createConf();

    // Preparations in security mode.
    if (isSecurityModel())
    {
        securityPrepare();
    }
    String userJarFilePath = "User .jar package address";
    System.setProperty(STORM_SUBMIT_JAR_PROPERTY, userJarFilePath);
    cmdSubmit(builder, config);
}

private static TopologyBuilder buildTopology()
{
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("spout", new RandomSentenceSpout(), 5);
    builder.setBolt("split", new SplitSentenceBolt(), 8).shuffleGrouping("spout");
    builder.setBolt("count", new WordCountBolt(), 12).fieldsGrouping("split", new Fields("word"));
    return builder;
}
5. Running an Application
Process of Submitting Code by Using CLI
Export a specific .jar package, such as example.jar, on Eclipse.
Right-click the storm_examples project and choose Export from the shortcut menu, as shown in the following figure.
Figure  Choosing Export from the Eclipse shortcut menu
Select JAR file on the Export panel and click Next, as shown in the following figure.
Figure Exporting a sample project
Select the src directory and an export directory and click Finish, as shown in the following figure.
Figure Selecting a file to be exported
Copy the .jar package to a directory on the Linux host by using WinSCP and set its permissions to 600.
For example, /home/example/streaming/example.jar.
chmod 600 /home/example/streaming/example.jar
Run commands in the streaming-0.9.2/bin directory in the Streaming installation directory to submit an application.
Run the following commands:
./storm jar /home/example/streaming/example.jar com.huawei.streaming.storm.example.WordCountTopology wordcount
Before submitting example.jar, verify that you have logged in in security mode. For details about security login, see the Administrator Guide.
Run the ./storm list command to view the submitted application. If the wordcount application is found, the task is successfully submitted.
Procedure for Remotely Submitting Code on Eclipse
Export a specific .jar package, such as example.jar, on Eclipse.
Right-click the storm_examples project and choose Export from the shortcut menu, as shown in the following figure.
Figure  Choosing Export from the Eclipse shortcut menu
Select JAR file on the Export panel and click Next, as shown in the following figure.
Figure  Exporting a sample project
Select the src directory and an export directory and click Finish, as shown in the following figure. For example, export the sample project to E:\example.jar.
Figure Selecting a file to be exported
Obtain a Machine-Machine user for user information authentication from the administrator. The user must have a keytab file and the krb5.conf file.
For details about how to obtain the user and modify password policies, see Administrator Guide.
Copy the user keytab file and the krb5.conf file to the conf directory of the Eclipse project.
Modify the WordCountTopology.java class to submit the application in remoteSubmit mode: rename the user keytab file if needed, change the user principal name, and set the .jar file address, as sketched after this procedure.
Verify that the difference between the Eclipse client time and storm cluster time is less than or equal to five minutes.
Ensure that the mappings between the host names and service IP addresses of all the hosts in the remote cluster are configured in the local hosts file.
Execute the Main method of the WordCountTopology.java class to submit an application.
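As a hedged sketch (the path E:\example.jar is illustrative and matches the export step above), setting the .jar file address in remoteSubmit amounts to editing the following lines of WordCountTopology.java:

// Point the remote submitter at the exported .jar package (illustrative path).
String userJarFilePath = "E:\\example.jar";
System.setProperty(STORM_SUBMIT_JAR_PROPERTY, userJarFilePath);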
Viewing the Result
Log in to FusionInsight Manager.
Enter the address in the address box of your browser. The address format is http://<WebService floating IP address of FusionInsight Manager>:8080/web.
For example, enter http://10.0.0.1:8080/web.
Choose Services > Streaming and click the Streaming UI link to access the Streaming UI, as shown in the following figure.
Figure Streaming Service management page
Click the wordcount application in the Storm UI to view its running status, as shown in the following figure.
Figure Streaming application execution page
Topology stats collects statistics about the total volume of data sent between operators in different time windows.
Spouts collects statistics about the total number of messages sent from the time the spout operator started to the current time. Bolts shows the totals for the count and split operators, as shown in the following figure.
Figure Total volume of data sent by the Streaming application operator
6. Storm-HDFS Development Guideline
This chapter applies only to cross-cluster access between two FusionInsight HD clusters. If the .jar packages described in this chapter or the version information in paths differ from the actual ones, use the actual ones.
Security login is classified into ticket login and keytab file login, and the procedures for the two login modes are the same. Ticket login is an open-source capability and requires manual ticket upload, which may cause reliability and usability problems. Therefore, keytab file login is recommended.
Procedure for Developing an Application
Step 1  Verify that the Streaming component of Huawei FusionInsight HD has been installed and is running properly.
Step 2  Verify that the mapping between host names and service IP addresses of all hosts in the two mutually trusted clusters has been configured in /etc/hosts on all hosts in the clusters.
Step 3  Verify that the time on the two mutually trusted clusters is the same.
Step 4  Import the sample project into the Eclipse development environment, as described in 2. Development Environment Preparation.
Step 5  If the security service is enabled for the clusters, the following login modes are available:
●  keytab mode: Obtain a Human-Machine user and a keytab file for that user from the administrator for login to the FusionInsight HD platform and authentication.
●  Ticket mode: Obtain a Human-Machine user from the administrator for subsequent secure login, enable the renewable and forwardable functions of the Kerberos service, and restart the cluster after the functions are enabled.
For details about how to obtain the user and the keytab file, see the Administrator Guide.
The default validity period of a user password is 90 days, so the validity period of the obtained keytab file is also 90 days. To prolong the validity period of the keytab file, modify the user password policy and obtain the keytab file again. For details, see the Administrator Guide.
The parameters for enabling the renewable and forwardable functions of the Kerberos service are on the System tab page of the Kerberos configuration page.
Step 6  Download the HDFS client program. For details, see the HDFS Development Guide.
Step 7  Decompress the HDFS client program downloaded to the local computer. After decompression, the core-site.xml, hdfs-site.xml, and ssl-client.xml files are generated in FusionInsight_VXXXRXXXCXXSPCXXX_Services_ClientConfig\HDFS\config.
Step 8  Add the obtained files to the Eclipse project:
1. keytab mode: Add the obtained configuration files and the keytab file to the conf folder of the local Eclipse project, and add the conf folder to the build path.
2. Ticket mode: Add the obtained configuration files to the conf folder of the local Eclipse project.
The default name of the obtained keytab file is user.keytab. To change the name, rename the file directly, but specify the new file name as a parameter when submitting the task.
Step 9  Upload the downloaded HDFS client installation package to the Linux host on the target client and install the HDFS client. For details, see the HDFS Development Guide.
Step 10  Copy the related .jar packages to an empty directory on the Linux host on which the Streaming client is installed. For example, the HDFS .jar packages must be remotely copied to /opt/jarsource on Host1 because the HDFS client and Streaming client are installed on different hosts. For details about how to obtain the related .jar packages, see Step 11 to Step 15.
Step 11  Find HDFS/hadoop/share/hadoop/common/lib in the directory where the HDFS client is installed and copy the following .jar packages to /opt/jarsource:
commons-cli-1.2.jar
commons-collections-3.2.1.jar
commons-configuration-1.6.jar
guava-11.0.2.jar
hadoop-*.jar
protobuf-java-2.5.0.jar
htrace-core-3.1.0-incubating.jar
Step 12  Find HDFS/hadoop/share/hadoop/common in the directory where the HDFS client is installed and copy hadoop-*.jar in that directory to /opt/jarsource.
Step 13  Find HDFS/hadoop/share/hadoop/hdfs in the directory where the HDFS client is installed and copy hadoop-hdfs-*.jar in that directory and hadoop-hdfs-client-*.jar in the other lib subdirectories to /opt/jarsource.
Step 14  Find Streaming/streaming-0.9.2/examples/storm-starter in the directory where the Streaming client is installed and copy storm-starter-topologies-0.9.2-incubating.jar to /opt/jarsource.
Step 15  Find /Streaming/streaming-0.9.2/external/storm-hdfs in the directory where the Streaming client is installed and copy storm-hdfs*.jar to /opt/jarsource.
Step 16  Use WinSCP to copy all .jar packages in /opt/jarsource to the lib directory of the local Eclipse project and add all .jar packages in the lib directory to the build path. If different versions of the same .jar package exist, retain the later version.
Code Sample
Create a Topology.
public static void main(String[] args) throws Exception
{
    TopologyBuilder builder = new TopologyBuilder();

    // Separator. Use "|" instead of the default "," to separate fields in a tuple.
    // Mandatory HdfsBolt parameter.
    RecordFormat format = new DelimitedRecordFormat()
            .withFieldDelimiter("|");

    // Sync the file system after every 1000 tuples.
    // Mandatory HdfsBolt parameter.
    SyncPolicy syncPolicy = new CountSyncPolicy(1000);

    // File rotation policy: when a file reaches 5 MB, writing starts on a new file.
    // Mandatory HdfsBolt parameter.
    FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB);

    // Target path for the files written to HDFS.
    // Mandatory HdfsBolt parameter.
    FileNameFormat fileNameFormat = new DefaultFileNameFormat()
            .withPath("/user/foo/");

    // Create the HdfsBolt.
    HdfsBolt bolt = new HdfsBolt()
            .withFileNameFormat(fileNameFormat)
            .withRecordFormat(format)
            .withRotationPolicy(rotationPolicy)
            .withSyncPolicy(syncPolicy);

    // spout is a random sentence spout.
    builder.setSpout("spout", new RandomSentenceSpout(), 1);
    builder.setBolt("split", new SplitSentence(), 1).shuffleGrouping("spout");
    builder.setBolt("count", bolt, 1).fieldsGrouping("split", new Fields("word"));

    // Add the plugin required for Kerberos authentication to a list. Mandatory in security mode.
    List<String> auto_tgts = new ArrayList<String>();

    // keytab mode
    auto_tgts.add("backtype.storm.security.auth.kerberos.AutoTGTFromKeytab");
    // Ticket mode, not recommended. The ticket mode and keytab mode cannot co-exist.
    //auto_tgts.add("backtype.storm.security.auth.kerberos.AutoTGT");
    //auto_tgts.add("org.apache.storm.hdfs.common.security.AutoHDFS");

    Config conf = new Config();
    // Write the plugin list configured on the client to the config item. Mandatory in security mode.
    conf.put(Config.TOPOLOGY_AUTO_CREDENTIALS, auto_tgts);

    if (args.length >= 2)
    {
        // Change the default keytab file name by specifying the new name as a parameter.
        conf.put(Config.STORM_CLIENT_KEYTAB_FILE, args[1]);
    }

    // Submit the topology by using the CLI.
    StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
}
Running Clients and Viewing Results
Export the .jar packages. Because HDFS-related .jar packages are used, the packages cannot be exported by using the Export function of Eclipse; they must be packaged by using the packaging tool provided by Streaming.
Export the .jar packages of the local Eclipse project and use WinSCP to copy them to /opt/jarsource. For details, see 5. Running an Application.
Use WinSCP to copy all configuration files except storm.yaml in the conf folder of the Eclipse project to /opt/jarsource.
Figure Examples of .jar packages
Use the Streaming packaging tool to package the .jar packages: find Streaming/streaming-0.9.2/bin in the Streaming client directory in Linux and run ./streaming-jartool.sh [input] [output]. The /opt/jarsource directory serves as the input directory, and any directory can serve as the output directory. In Windows, copy /opt/jarsource to the local computer and run streaming-jartool.cmd. source.jar is generated in the specified output directory.
If you run the command in the preceding step in Windows, use WinSCP to copy the resulting .jar package to a directory in Linux and set its permissions to 600.
For example, /home/example/streaming/source.jar.
chmod 600 /home/example/streaming/source.jar
Run commands in the streaming-0.9.2/bin directory in the Streaming installation directory to submit an application.
Run the following commands:
./storm jar /home/example/streaming/source.jar com.huawei.streaming.storm.example.SimpleHDFSTopology hdfs-test
To change the keytab file name to huawei.keytab in keytab mode, run the following command:
./storm jar /home/example/streaming/source.jar com.huawei.streaming.storm.example.SimpleHDFSTopology hdfs-test huawei.keytab
Before submitting source.jar, verify that you have logged in in Kerberos security mode and that the login user in keytab mode is the same as the user to whom the uploaded keytab file belongs. For details about user security login, see the Administrator Guide.
Run the ./storm list command to view the submitted application. If the hdfs-test application is found, the task is successfully submitted.
After successfully submitting the topology, log in to the HDFS cluster to view the files written by the topology.
To log in in ticket mode, perform the following operations to regularly upload a ticket. The interval for uploading the ticket depends on the deadline for updating the ticket:
1. Add the following content to a new line at the end of /Streaming/streaming-0.9.2/conf/storm.yaml in the Streaming client installation directory:
topology.auto-credentials:
- backtype.storm.security.auth.kerberos.AutoTGT
2. Run the ./storm upload-credentials hdfs-test command.
7. Storm-HBase Development Guideline
This chapter applies only to cross-cluster access between FusionInsight Stream and FusionInsight HD clusters. If the .jar packages described in this chapter or the version information in paths differ from the actual ones, use the actual ones.
Security login is classified into ticket login and keytab file login, and the procedures for the two login modes are the same. Ticket login is an open-source capability and requires manual ticket upload when the ticket expires, which may cause reliability and usability problems. Therefore, keytab file login is recommended.
Procedure for Developing an Application
Step 1  Verify that the Streaming component of Huawei FusionInsight Stream has been installed and is running properly.
Step 2  Verify that the mapping between host names and service IP addresses of all hosts in the two mutually trusted clusters has been configured in /etc/hosts on all hosts in the clusters.
Step 3  Verify that the time on the two mutually trusted clusters is the same.
Step 4  Import the sample project into the Eclipse development environment, as described in 2. Development Environment Preparation.
Step 5  If the security service is enabled for the clusters, the following login modes are available:
●  keytab mode: Obtain a Human-Machine user and a keytab file for that user from the administrator for login to the FusionInsight Stream platform and authentication.
●  Ticket mode: Obtain a Human-Machine user from the administrator for subsequent secure login, enable the renewable and forwardable functions of the Kerberos service, and restart the cluster after the functions are enabled.
For details about how to obtain the user and the keytab file, see the Administrator Guide.
The default validity period of a user password is 90 days, so the validity period of the obtained keytab file is also 90 days. To prolong the validity period of the keytab file, modify the user password policy and obtain the keytab file again. For details, see the Administrator Guide.
The parameters for enabling the renewable and forwardable functions of the Kerberos service are on the System tab page of the Kerberos configuration page.
Step 6  Download the HBase client program. For details, see the HBase Development Guide.
Step 7  Decompress the HBase client program downloaded to the local computer. After decompression, the core-site.xml, hdfs-site.xml, and hbase-site.xml files are generated in FusionInsight_VXXXRXXXCXXSPCXXX_Services_ClientConfig\HBase\config.
Step 8  Add the obtained files to the Eclipse project:
1. keytab mode: Add the obtained configuration files and the keytab file to the conf folder of the local Eclipse project, and add the conf folder to the build path.
2. Ticket mode: Add the obtained configuration files to the conf folder of the local Eclipse project.
The default name of the obtained keytab file is user.keytab. To change the name, rename the file directly, but specify the new file name as a parameter when submitting the task.
Step 9  Log in to the Linux server, upload the downloaded HBase client installation package, and install the HBase client. For details, see the HBase Development Guide.
Step 10  Copy the related .jar packages to an empty directory on the Linux host on which the Streaming client is installed. For example, the HBase .jar packages must be remotely copied to /opt/jarsource on Host1 because the HBase client and Streaming client are installed on different hosts. For details about how to obtain the related .jar packages, see Step 11 to Step 12.
Step 11  Find HBase/hbase/lib in the directory where the HBase client is installed and copy the following .jar packages to /opt/jarsource:
hbase-*.jar
hadoop-*.jar
htrace-core-2.04.jar
jackson-core-asl-1.9.13.jar
jackson-mapper-asl-1.9.13.jar
commons-cli-1.2.jar
commons-collections-3.2.1.jar
commons-configuration-1.6.jar
guava-12.0.1.jar
protobuf-java-2.5.0.jar
netty-all-4.0.23.Final.jar
Step 12  Find /Streaming/streaming-0.9.2/external/storm-hbase in the directory where the Streaming client is installed and copy storm-hbase*.jar to /opt/jarsource.
Step 13  Use WinSCP to copy all .jar packages in /opt/jarsource to the lib directory of the local Eclipse project and add all .jar packages in the lib directory to the build path. If different versions of the same .jar package exist, retain the later version.
Code Sample
Create a Topology.
public static void main(String[] args) throws Exception
{
    // Add the plugin required for Kerberos authentication to the list. Mandatory in security mode.
    List<String> auto_tgts = new ArrayList<String>();

    // keytab mode
    auto_tgts.add("backtype.storm.security.auth.kerberos.AutoTGTFromKeytab");
    // Ticket mode, not recommended. The ticket mode and keytab mode cannot co-exist.
    //auto_tgts.add("backtype.storm.security.auth.kerberos.AutoTGT");
    //auto_tgts.add("org.apache.storm.hbase.security.AutoHBase");

    Config conf = new Config();
    // Write the plugin configured on the client to the config item. Mandatory in security mode.
    conf.put(Config.TOPOLOGY_AUTO_CREDENTIALS, auto_tgts);

    if (args.length >= 2)
    {
        // Change the default keytab file name by specifying the new name as a parameter.
        conf.put(Config.STORM_CLIENT_KEYTAB_FILE, args[1]);
    }

    // HBase client configuration. Only the "hbase.rootdir" configuration item is provided here, and it is optional.
    Map<String, Object> hbConf = new HashMap<String, Object>();
    if (args.length >= 3)
    {
        hbConf.put("hbase.rootdir", args[2]);
    }
    // Mandatory parameter. If it is not set, it is left blank.
    conf.put("hbase.conf", hbConf);

    // spout is a random word spout.
    WordSpout spout = new WordSpout();
    WordCounter bolt = new WordCounter();

    // HBaseMapper, for parsing tuple content.
    SimpleHBaseMapper mapper = new SimpleHBaseMapper()
            .withRowKeyField("word")
            .withColumnFields(new Fields("word"))
            .withCounterFields(new Fields("count"))
            .withColumnFamily("cf");

    // HBaseBolt; the first parameter is the table name.
    // withConfigKey("hbase.conf") transfers the HBase client configuration to HBaseBolt.
    HBaseBolt hbase = new HBaseBolt("WordCount", mapper).withConfigKey("hbase.conf");

    // wordSpout ==> countBolt ==> HBaseBolt
    TopologyBuilder builder = new TopologyBuilder();

    builder.setSpout(WORD_SPOUT, spout, 1);
    builder.setBolt(COUNT_BOLT, bolt, 1).shuffleGrouping(WORD_SPOUT);
    builder.setBolt(HBASE_BOLT, hbase, 1).fieldsGrouping(COUNT_BOLT, new Fields("word"));

    // Submit the topology by using the CLI.
    StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
}
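With this mapper configuration, the word value of each tuple becomes the HBase row key, the word column is written under column family cf, and count is maintained as a counter column. This is why the WordCount table created in the next section needs only the single cf column family.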
Running Clients and Viewing Results
Export the .jar packages. Because HBase-related .jar packages are used, the packages cannot be exported by using the Export function of Eclipse; they must be packaged by using the packaging tool provided by Streaming.
Export the .jar packages of the local Eclipse project and use WinSCP to copy them to /opt/jarsource. For details, see 5. Running an Application.
Use WinSCP to copy all configuration files except storm.yaml in the conf folder of the Eclipse project to /opt/jarsource.
Figure Examples of .jar packages
Use the Streaming packaging tool to package the .jar packages: find Streaming/streaming-0.9.2/bin in the Streaming client directory in Linux and run ./streaming-jartool.sh [input] [output]. The /opt/jarsource directory serves as the input directory, and any directory can serve as the output directory. In Windows, copy /opt/jarsource to the local computer and run streaming-jartool.cmd. source.jar is generated in the specified output directory.
If you run the command in the preceding step in Windows, use WinSCP to copy the resulting .jar package to a directory in Linux and set its permissions to 600.
For example, /home/example/streaming/source.jar.
chmod 600 /home/example/streaming/source.jar
Run commands in the streaming-0.9.2/bin directory in the Streaming installation directory to submit an application.
Run the following commands:
./storm jar /home/example/streaming/source.jar com.huawei.streaming.storm.example.SimpleHBaseTopology hbase-test
To change the keytab file name to huawei.keytab in keytab mode, run the following command:
./storm jar /home/example/streaming/source.jar com.huawei.streaming.storm.example.SimpleHBaseTopology hbase-test huawei.keytab
Before submitting source.jar, verify that you have logged in in Kerberos security mode and that the login user in keytab mode is the same as the user to whom the uploaded keytab file belongs. For details about user security login, see the Administrator Guide.
HBaseBolt in the preceding example does not create tables, so you must verify that the necessary tables exist in HBase. If they do not exist, run create 'WordCount', 'cf' in the HBase shell to create the table manually.
In HBase security mode, a user must have permission to access the related tables, column families, and columns. Therefore, log in to the HBase cluster as an HBase administrator, run the grant command in the HBase shell to grant the user access permission on the table (WordCount in this example), and then log in as that user and submit the topology.
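For example, a hedged sketch using the standard HBase shell grant syntax and the Human-Machine user john from the client installation procedure (adjust the user name and permission letters to your environment):
grant 'john', 'RW', 'WordCount'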
Run the ./storm list command to view the submitted application. If the hbase-test application is found, the task is successfully submitted.
After successfully submitting the topology, log in to the HBase cluster to view the WordCount table.
To log in in ticket mode, perform the following operations to regularly upload a ticket. The interval for uploading the ticket depends on the deadline for updating the ticket:
1. Add the following content to a new line at the end of /Streaming/streaming-0.9.2/conf/storm.yaml in the Streaming client installation directory:
topology.auto-credentials:
- backtype.storm.security.auth.kerberos.AutoTGT
2. Run the ./storm upload-credentials hbase-test command.
8. Interfaces
Versions of the interfaces adopted by Storm are consistent with those in the open-source community. For details, see http://storm.apache.org/documentation/Home.html.
Versions of the interfaces adopted by Storm-HDFS are consistent with those in the open-source community. For details, see the storm-hdfs documentation in the open-source community.
Versions of the interfaces adopted by Storm-HBase are consistent with those in the open-source community. For details, see the storm-hbase documentation in the open-source community.