⑴ How do I copy a local file to HDFS with a Java program and show the progress?
Package the program as a jar and copy it to the Linux machine.
Change into that directory and run: hadoop jar maprecer.jar /home/clq/export/java/count.jar hdfs://ubuntu:9000/out06/count/
The first path is the local file, the second is the HDFS destination.
On success, the characters printed by your progress callback (here, dots) appear on the console.
package com.clq.hdfs;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class FileCopyWithProgress {
    //********************************
    // Copy a local file to HDFS
    //********************************
    public static void main(String[] args) throws IOException {
        String localSrc = args[0];
        String dst = args[1];
        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FSDataOutputStream out = fs.create(new Path(dst), new Progressable() {
            @Override
            public void progress() {
                // called periodically as data is written to HDFS
                System.out.print(".");
            }
        });
        // copy the stream to HDFS and close both streams when done
        IOUtils.copyBytes(in, out, conf, true);
    }
}
A possible exception:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: java.io.IOException: Cannot create /out06; already exists as a directory
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1569)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
This means the target path already exists on HDFS; switch to a different path (or remove the existing one) and it will work.
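If you would rather handle this in code than change the path by hand, a minimal sketch (the target path below is just a placeholder) is to test for the target with FileSystem.exists() and delete it before calling create():

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EnsureTargetIsFree {
    public static void main(String[] args) throws Exception {
        String dst = "hdfs://ubuntu:9000/out06/count";  // placeholder target path
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        Path target = new Path(dst);
        if (fs.exists(target)) {
            // delete recursively so a leftover directory does not block create()
            fs.delete(target, true);
        }
        // fs.create(target, ...) can now proceed without the "already exists" error
        fs.close();
    }
}

Deleting blindly is of course destructive; in a real job you might instead append a timestamp to the output path.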
⑵ How do I use Java code to store data obtained from an external source directly into HDFS?
There are several file formats you can use when writing to HDFS; here is an example of one of them, the SequenceFile format.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.Writer.Option;
import org.apache.hadoop.io.Text;

public class SeqWrite {

    private static final String[] data = {"a,b,c,d,e,f,g", "h,i,j,k,l,m,n", "o,p,q,r,s,t", "u,v,w,x,y,z", "0,1,2,3,4", "5,6,7,8,9"};

    public static void main(String[] args) throws IOException, Exception {
        Configuration configuration = new Configuration();
        // the address of your host (normally a full URI such as hdfs://host:port)
        configuration.set("fs.defaultFS", "192.168.51.140");
        // this is the path the data is stored under
        Path path = new Path("/tmp/test1.seq");
        Option option = SequenceFile.Writer.file(path);
        Option optKey = SequenceFile.Writer.keyClass(IntWritable.class);
        Option optValue = SequenceFile.Writer.valueClass(Text.class);
        SequenceFile.Writer writer = null;
        IntWritable key = new IntWritable(10);
        Text value = new Text();
        writer = SequenceFile.createWriter(configuration, option, optKey, optValue);
        for (int i = 0; i < data.length; i++) {
            key.set(i);
            value.set(data[i]);
            writer.append(key, value);
            writer.hsync();
            Thread.sleep(10000L);
        }
        IOUtils.closeStream(writer);
    }
}
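To confirm what was written, a matching read-back sketch (it reuses the same path and key/value types assumed in the writer above) could look like this:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeqRead {
    public static void main(String[] args) throws IOException {
        Configuration configuration = new Configuration();
        // same assumed path as in the writer above
        Path path = new Path("/tmp/test1.seq");
        SequenceFile.Reader reader = null;
        try {
            reader = new SequenceFile.Reader(configuration, SequenceFile.Reader.file(path));
            IntWritable key = new IntWritable();
            Text value = new Text();
            // iterate over every key/value pair in the sequence file
            while (reader.next(key, value)) {
                System.out.println(key.get() + "\t" + value.toString());
            }
        } finally {
            IOUtils.closeStream(reader);
        }
    }
}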
⑶ When uploading files to HDFS with Java, how can I implement resumable uploads?
@Component("javaLargeFileUploaderServlet")
@WebServlet(name = "javaLargeFileUploaderServlet", urlPatterns = { "/javaLargeFileUploaderServlet" })
public class UploadServlet extends HttpRequestHandlerServlet
implements HttpRequestHandler {
private static final Logger log = LoggerFactory.getLogger(UploadServlet.class);
@Autowired
UploadProcessor uploadProcessor;
@Autowired
FileUploaderHelper fileUploaderHelper;
@Autowired
ExceptionCodeMappingHelper exceptionCodeMappingHelper;
@Autowired
Authorizer authorizer;
@Autowired
StaticStateIdentifierManager staticStateIdentifierManager;
@Override
public void handleRequest(HttpServletRequest request, HttpServletResponse response)
throws IOException {
log.trace("Handling request");
Serializable jsonObject = null;
try {
// extract the action from the request
UploadServletAction actionByParameterName =
UploadServletAction.valueOf(fileUploaderHelper.getParameterValue(request, UploadServletParameter.action));
// check authorization
checkAuthorization(request, actionByParameterName);
// then process the asked action
jsonObject = processAction(actionByParameterName, request);
// if something has to be written to the response
if (jsonObject != null) {
fileUploaderHelper.writeToResponse(jsonObject, response);
}
}
// If exception, write it
catch (Exception e) {
exceptionCodeMappingHelper.processException(e, response);
}
}
private void checkAuthorization(HttpServletRequest request, UploadServletAction actionByParameterName)
throws MissingParameterException, AuthorizationException {
// check authorization
// if its not get progress (because we do not really care about authorization for get
// progress and it uses an array of file ids)
if (!actionByParameterName.equals(UploadServletAction.getProgress)) {
// extract uuid
final String fileIdFieldValue = fileUploaderHelper.getParameterValue(request, UploadServletParameter.fileId, false);
// if this is init, the identifier is the one in parameter
UUID clientOrJobId;
String parameter = fileUploaderHelper.getParameterValue(request, UploadServletParameter.clientId, false);
if (actionByParameterName.equals(UploadServletAction.getConfig) && parameter != null) {
clientOrJobId = UUID.fromString(parameter);
}
// if not, get it from manager
else {
clientOrJobId = staticStateIdentifierManager.getIdentifier();
}
// call authorizer
authorizer.getAuthorization(
request,
actionByParameterName,
clientOrJobId,
fileIdFieldValue != null ? getFileIdsFromString(fileIdFieldValue).toArray(new UUID[] {}) : null);
}
}
private Serializable processAction(UploadServletAction actionByParameterName, HttpServletRequest request)
throws Exception {
log.debug("Processing action " + actionByParameterName.name());
Serializable returnObject = null;
switch (actionByParameterName) {
case getConfig:
String parameterValue = fileUploaderHelper.getParameterValue(request, UploadServletParameter.clientId, false);
returnObject =
uploadProcessor.getConfig(
parameterValue != null ? UUID.fromString(parameterValue) : null);
break;
case verifyCrcOfUncheckedPart:
returnObject = verifyCrcOfUncheckedPart(request);
break;
case prepareUpload:
returnObject = prepareUpload(request);
break;
case clearFile:
uploadProcessor.clearFile(UUID.fromString(fileUploaderHelper.getParameterValue(request, UploadServletParameter.fileId)));
break;
case clearAll:
uploadProcessor.clearAll();
break;
case pauseFile:
List<UUID> uuids = getFileIdsFromString(fileUploaderHelper.getParameterValue(request, UploadServletParameter.fileId));
uploadProcessor.pauseFile(uuids);
break;
case resumeFile:
returnObject =
uploadProcessor.resumeFile(UUID.fromString(fileUploaderHelper.getParameterValue(request, UploadServletParameter.fileId)));
break;
case setRate:
uploadProcessor.setUploadRate(UUID.fromString(fileUploaderHelper.getParameterValue(request, UploadServletParameter.fileId)),
Long.valueOf(fileUploaderHelper.getParameterValue(request, UploadServletParameter.rate)));
break;
case getProgress:
returnObject = getProgress(request);
break;
}
return returnObject;
}
List<UUID> getFileIdsFromString(String fileIds) {
String[] splittedFileIds = fileIds.split(",");
List<UUID> uuids = Lists.newArrayList();
for (int i = 0; i < splittedFileIds.length; i++) {
uuids.add(UUID.fromString(splittedFileIds[i]));
}
return uuids;
}
private Serializable getProgress(HttpServletRequest request)
throws MissingParameterException {
Serializable returnObject;
String[] ids =
new Gson()
.fromJson(fileUploaderHelper.getParameterValue(request, UploadServletParameter.fileId), String[].class);
Collection<UUID> uuids = Collections2.transform(Arrays.asList(ids), new Function<String, UUID>() {
@Override
public UUID apply(String input) {
return UUID.fromString(input);
}
});
returnObject = Maps.newHashMap();
for (UUID fileId : uuids) {
try {
ProgressJson progress = uploadProcessor.getProgress(fileId);
((HashMap<String, ProgressJson>) returnObject).put(fileId.toString(), progress);
}
catch (FileNotFoundException e) {
log.debug("No progress will be retrieved for " + fileId + " because " + e.getMessage());
}
}
return returnObject;
}
private Serializable prepareUpload(HttpServletRequest request)
throws MissingParameterException, IOException {
// extract file information
PrepareUploadJson[] fromJson =
new Gson()
.fromJson(fileUploaderHelper.getParameterValue(request, UploadServletParameter.newFiles), PrepareUploadJson[].class);
// prepare them
final HashMap<String, UUID> prepareUpload = uploadProcessor.prepareUpload(fromJson);
// return them
return Maps.newHashMap(Maps.transformValues(prepareUpload, new Function<UUID, String>() {
public String apply(UUID input) {
return input.toString();
};
}));
}
private Boolean verifyCrcOfUncheckedPart(HttpServletRequest request)
throws IOException, MissingParameterException, FileCorruptedException, FileStillProcessingException {
UUID fileId = UUID.fromString(fileUploaderHelper.getParameterValue(request, UploadServletParameter.fileId));
try {
uploadProcessor.verifyCrcOfUncheckedPart(fileId,
fileUploaderHelper.getParameterValue(request, UploadServletParameter.crc));
}
catch (InvalidCrcException e) {
// no need to log this exception, a fallback behaviour is defined in the
// throwing method.
// but we need to return something!
return Boolean.FALSE;
}
return Boolean.TRUE;
}
}
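The servlet above only covers the HTTP side of the problem: it identifies files by UUID, tracks progress, verifies CRCs and supports pause/resume. On the HDFS side, one possible way to resume an interrupted upload is to check how many bytes of the target file already exist and continue from that offset with FileSystem.append(). The sketch below is only an illustration of that idea: the paths are placeholders, and it assumes your cluster has append support enabled.

import java.io.FileInputStream;
import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ResumableHdfsUpload {
    public static void main(String[] args) throws Exception {
        String localSrc = "/tmp/bigfile.dat";                  // placeholder local file
        String dst = "hdfs://master:9000/upload/bigfile.dat";  // placeholder HDFS target
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        Path target = new Path(dst);

        long alreadyUploaded = 0L;
        FSDataOutputStream out;
        if (fs.exists(target)) {
            // resume: find out how much already reached HDFS and append after it
            alreadyUploaded = fs.getFileStatus(target).getLen();
            out = fs.append(target);
        } else {
            out = fs.create(target);
        }

        InputStream in = new FileInputStream(localSrc);
        try {
            // skip the prefix that is already on HDFS, then copy the rest
            IOUtils.skipFully(in, alreadyUploaded);
            IOUtils.copyBytes(in, out, conf, false);
        } finally {
            IOUtils.closeStream(in);
            IOUtils.closeStream(out);
        }
    }
}

A real implementation would also verify a checksum of the already-uploaded prefix (much like the servlet's verifyCrcOfUncheckedPart) before trusting its length.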
⑷ Permission problems when doing simple uploads and downloads to HDFS through the Java API
Disable HDFS permission checking by adding the following configuration to hdfs-site.xml:
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
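Disabling permission checking is fine on a test cluster but not something you would want in production (on Hadoop 2.x the property is named dfs.permissions.enabled; the older dfs.permissions still works as a deprecated alias). A gentler alternative, sketched below with an assumed NameNode address, user name and paths, is to open the FileSystem as the user that owns the target directory:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadAsHdfsUser {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // connect as the "hadoop" user instead of your local OS user (all values are assumptions)
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf, "hadoop");
        fs.copyFromLocalFile(new Path("/tmp/local.txt"), new Path("/user/hadoop/local.txt"));
        fs.close();
    }
}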
⑸ How does Java connect to the HDFS file system, and which packages are needed?
The Apache Hadoop project provides a set of APIs that let a Java project operate on files in HDFS: opening, reading, writing and deleting files, as well as creating and deleting directories and listing all the files in a directory.
1. Download Hadoop from http://hadoop.apache.org/releases.html, unpack it, and add all of its jars to your project's lib directory.
2. The program then follows three steps: 1) get a Configuration object, 2) get a FileSystem object, 3) perform the file operations. A simple example:
/**
 *
 */
package org.jrs.wlh;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * @PutMeger.java
 * Uploads data from the local file system into HDFS through the Java API
 * @version $Revision$</br>
 * update: $Date$
 */
public class PutMeger {

    public static void main(String[] args) throws IOException {
        String[] str = new String[]{"E:\\hadoop\\UploadFileClient.java", "hdfs://master:9000/user/hadoop/inccnt.java"};
        Configuration conf = new Configuration();
        FileSystem fileS = FileSystem.get(URI.create(str[1]), conf);  // HDFS file system object, bound to the hdfs:// URI
        FileSystem localFile = FileSystem.getLocal(conf);             // local file system object
        Path input = new Path(str[0]);   // local input path
        Path out = new Path(str[1]);     // output path on HDFS
        try {
            FileStatus[] inputFile = localFile.listStatus(input);     // listStatus returns the files under the input path
            FSDataOutputStream outStream = fileS.create(out);         // create the output stream on HDFS
            for (int i = 0; i < inputFile.length; i++) {
                System.out.println(inputFile[i].getPath().getName());
                FSDataInputStream in = localFile.open(inputFile[i].getPath());
                byte buffer[] = new byte[1024];
                int bytesRead = 0;
                while ((bytesRead = in.read(buffer)) > 0) {           // read the data chunk by chunk
                    outStream.write(buffer, 0, bytesRead);
                }
                in.close();
            }
            outStream.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
⑹ Operating on Hadoop HDFS with the Java API from Eclipse: I try to copy a local file into HDFS, but the target ends up on the local file system rather than HDFS.
Congratulations on getting started; learning Hadoop really does begin with getting the commands right.
If you want to learn Linux commands, go to www.linuxsky.cn, where you can also pick up shell scripting and svn commands.
⑺ A problem when writing a Java program to upload local files to HDFS
Change this: FileSystem hdfs = FileSystem.get(config);
to this: FileSystem hdfs = FileSystem.get(URI.create("hdfs://master:9000"), config);
The first line gets you the local file system object; only the second form gets the HDFS file system object. Use the first form when you actually want to work with local files. I made exactly the same mistake when I started on 2.7.4, and switching to the second form fixed it.
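Putting that fix into a complete upload, a minimal sketch could look like the following (the NameNode address and both paths are placeholders for your environment):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadToHdfs {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        // bind explicitly to HDFS so the (possibly local) default file system is not used
        FileSystem hdfs = FileSystem.get(URI.create("hdfs://master:9000"), config);
        // copy a local file up to HDFS
        hdfs.copyFromLocalFile(new Path("/home/clq/data.txt"), new Path("/user/clq/data.txt"));
        hdfs.close();
    }
}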
⑻ How do I read and write HDFS with the Java API?
HDFS is the foundation of the Hadoop ecosystem and one of its most important members. Most of the time we manage HDFS with Linux shell commands (creating, deleting, modifying and uploading files), because the shell is simple and convenient. Sometimes, though, we need to manage the file system programmatically.
For example, suppose we need to read all the logs under some folder on HDFS, process them, and write the result back to HDFS, into HBase, or into some other storage system. Doing that through the shell gets clumsy, so we write a program instead. The example here uses plain Java; other languages such as C++, PHP or Python could do the same, but I won't demonstrate those here (honestly I don't know them myself, apart from some beginner-level Python).
The code below is given for reference:
package com.java.api.hdfs;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
/**
* @author 三劫散仙
* Utility class for operating on HDFS
* through the Java API
*
* **/
public class OperaHDFS {
public static void main(String[] args)throws Exception {
//System.out.println("aaa");
// uploadFile();
//createFileOnHDFS();
//deleteFileOnHDFS();
//createDirectoryOnHDFS();
//deleteDirectoryOnHDFS();
// renameFileOrDirectoryOnHDFS();
readHDFSListAll();
}
/***
* Load the configuration
* **/
static Configuration conf=new Configuration();
/**
* Rename a directory or file
* **/
public static void renameFileOrDirectoryOnHDFS()throws Exception{
FileSystem fs=FileSystem.get(conf);
Path p1 =new Path("hdfs://10.2.143.5:9090/root/myfile/my.txt");
Path p2 =new Path("hdfs://10.2.143.5:9090/root/myfile/my2.txt");//rename target (assumed example path)
fs.rename(p1, p2);
System.out.println("Renamed the directory or file successfully.....");
}
/***
*
* Read all the files under a given
* HDFS directory and print their contents
*
* **/
public static void readHDFSListAll() throws Exception{
//stream used to read the file contents
InputStream in=null;
//get the HDFS conf
//and from it the HDFS file system
FileSystem hdfs=FileSystem.get(conf);
//a buffered reader so we can read line by line
BufferedReader buff=null;
//root directory of the log files
Path listf =new Path("hdfs://10.2.143.5:9090/root/myfile/");
//list all the second-level sub-directories under the root
FileStatus stats[]=hdfs.listStatus(listf);
//j is just a counter, handy when checking the output
int j=0;
for(int i = 0; i < stats.length; i++){
//list the file paths under each sub-directory
FileStatus temp[]=hdfs.listStatus(new Path(stats[i].getPath().toString()));
for(int k = 0; k < temp.length;k++){
System.out.println("File path: "+temp[k].getPath().toString());
//get the Path
Path p=new Path(temp[k].getPath().toString());
//open the file stream
in=hdfs.open(p);
//wrap the stream in a BufferedReader
buff=new BufferedReader(new InputStreamReader(in));
String str=null;
while((str=buff.readLine())!=null){
System.out.println(str);
}
buff.close();
in.close();
}
}
hdfs.close();
}
/**
* Download a file or directory from HDFS to the local machine
*
* **/
public static void downloadFileorDirectoryOnHDFS()throws Exception{
FileSystem fs=FileSystem.get(conf);
Path p1 =new Path("hdfs://10.2.143.5:9090/root/myfile//my2.txt");
Path p2 =new Path("D://7");
fs.copyToLocalFile(p1, p2);
fs.close();//release resources
}
/**
* Create a directory on HDFS
*
* **/
public static void createDirectoryOnHDFS()throws Exception{
FileSystem fs=FileSystem.get(conf);
Path p =new Path("hdfs://10.2.143.5:9090/root/myfile");
fs.mkdirs(p);//create the directory
fs.close();//release resources
System.out.println("Directory created successfully.....");
}
/**
* Delete a directory on HDFS
*
* **/
public static void deleteDirectoryOnHDFS()throws Exception{
FileSystem fs=FileSystem.get(conf);
Path p =new Path("hdfs://10.2.143.5:9090/root/myfile");
fs.deleteOnExit(p);//schedule the delete, performed when fs is closed (assumed call, mirroring deleteFileOnHDFS)
fs.close();//release resources
System.out.println("Directory deleted successfully.....");
}
/**
* Create a file on HDFS
*
* **/
public static void createFileOnHDFS()throws Exception{
FileSystem fs=FileSystem.get(conf);
Path p =new Path("hdfs://10.2.143.5:9090/root/abc.txt");
fs.createNewFile(p);
//fs.create(p);
fs.close();//release resources
System.out.println("File created successfully.....");
}
/**
* Delete a file on HDFS
*
* **/
public static void deleteFileOnHDFS()throws Exception{
FileSystem fs=FileSystem.get(conf);
Path p =new Path("hdfs://10.2.143.5:9090/root/abc.txt");
fs.deleteOnExit(p);
fs.close();//release resources
System.out.println("Deleted successfully.....");
}
/***
* Upload a local file
* to HDFS
*
* **/
public static void uploadFile()throws Exception{
//load the default configuration
FileSystem fs=FileSystem.get(conf);
//the local file
Path src =new Path("D:\\6");
//the HDFS destination
Path dst =new Path("hdfs://10.2.143.5:9090/root/");
try {
fs.copyFromLocalFile(src, dst);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.out.println("Upload succeeded........");
fs.close();//release resources
}
}
⑼ I have just started learning Spark and want to upload files to HDFS. Do I need Hadoop for that, and then Java programming, e.g. in Eclipse?
Spark treats HDFS as just another data source, so the data still has to be stored there first; after that, only the programming changes from Hadoop to Spark. Whether you use Eclipse does not matter, as long as you can compile and run the code.
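As a rough illustration (the NameNode address, application name and file path below are placeholders, the Spark master is assumed to be supplied by spark-submit, and the Spark Java API is assumed to be on your classpath), reading a file that already sits on HDFS from Spark looks like this:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class ReadFromHdfs {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("ReadFromHdfs");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // Spark reads directly from HDFS, exactly like any other data source
        JavaRDD<String> lines = sc.textFile("hdfs://master:9000/user/hadoop/input.txt");
        System.out.println("line count: " + lines.count());
        sc.stop();
    }
}

Getting the file onto HDFS in the first place is still done with the HDFS shell (hadoop fs -put) or with the FileSystem API shown in the answers above.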
⑽ Hadoop upload error: java.io.IOException: Mkdirs failed to create /user/hadoop
I'm not that familiar with Hadoop, but judging from the error the directory creation failed; the first thing to check is permissions.
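One way to narrow it down, sketched below with an assumed NameNode URI and path, is to try the mkdirs through the FileSystem API and print the owner and permissions of the parent directory, which usually makes a permission problem obvious. Note that this same message can also appear when the client cannot create a local temporary or staging directory, in which case it is the Linux permissions of that local path that need checking.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckUserDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf);  // assumed NameNode address
        Path dir = new Path("/user/hadoop");
        Path parent = dir.getParent();

        // show who owns the parent directory and with what permissions
        FileStatus parentStatus = fs.getFileStatus(parent);
        System.out.println(parent + " owner=" + parentStatus.getOwner()
                + " group=" + parentStatus.getGroup()
                + " perms=" + parentStatus.getPermission());

        // try to create the directory; mkdirs returns false or throws if it fails
        boolean created = fs.mkdirs(dir);
        System.out.println("mkdirs(" + dir + ") returned " + created);
        fs.close();
    }
}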