Android framework: is there a mature Camera framework?

1. What kinds of work can a developer do on Android?
Application development: use the rich SDK that Android provides to build all kinds of novel applications.
System development: Google has implemented all of the hardware-independent code in Android, but the hardware abstraction layer, which is tightly bound to the hardware, is not (and cannot be) supplied. The underlying hardware varies endlessly across device vendors, so unified hardware drivers and interface implementations are impossible; Android can only define standard interfaces. Hardware vendors therefore have to develop their own device drivers and implement the interfaces the Android framework defines.
2. Source analysis of the Camera system in the Android framework
Every Android phone ships with a Camera application that implements the photo-taking function; hardware vendors may modify it to fit their own UI style. This article analyzes only the stock Camera application and framework (Android 4.0).
The stock Camera application code is in Camera.java (android4.0\packages\apps\camera\src\com\android\camera); this is the topmost, application-layer piece of the Camera system.
Below is part of the Camera class:
public class Camera extends ActivityBase implements FocusManager.Listener,
        View.OnTouchListener, ShutterButton.OnShutterButtonListener,
        SurfaceHolder.Callback, ModePicker.OnModeChangeListener,
        FaceDetectionListener, CameraPreference.OnPreferenceChangedListener,
        LocationManager.Listener, ShutterButton.OnShutterButtonLongPressListener {

As the declaration shows, Camera implements a large number of listener interfaces so that it can react to all kinds of events (focus events, user touch events, and so on). The application extends ActivityBase, so it can override onCreate, onResume, and similar callbacks and do its initialization there — mostly creating the various listener objects and reading the camera parameters.
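This "one controller, many small listener interfaces" style can be sketched in plain Java. The interface and class names below are made up for illustration and are not Android APIs; the point is only that each event source sees just the narrow callback type it needs.

```java
// Hypothetical listener interfaces, standing in for the many OnXxxListener
// interfaces the real Camera activity implements.
interface OnShutterListener { void onShutter(); }
interface OnFocusListener { void onFocus(boolean success); }

public class ListenerSketch implements OnShutterListener, OnFocusListener {
    final StringBuilder log = new StringBuilder();

    @Override public void onShutter() { log.append("shutter;"); }
    @Override public void onFocus(boolean success) {
        log.append("focus=").append(success).append(";");
    }

    public static void main(String[] args) {
        ListenerSketch controller = new ListenerSketch();
        // Each event source holds only the narrow interface type.
        OnFocusListener focusSource = controller;
        OnShutterListener shutterSource = controller;
        focusSource.onFocus(true);
        shutterSource.onShutter();
        System.out.println(controller.log); // focus=true;shutter;
    }
}
```

The payoff is the same as in the real activity: event plumbing stays decoupled from the controller's concrete class.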
The key work happens in the doOnResume function:
protected void doOnResume() {
    if (mOpenCameraFail || mCameraDisabled) return;

    mPausing = false;
    mJpegPictureCallbackTime = 0;
    mZoomValue = 0;

    // Start the preview if it is not started.
    if (mCameraState == PREVIEW_STOPPED) {
        try {
            mCameraDevice = Util.openCamera(this, mCameraId);
            initializeCapabilities();
            resetExposureCompensation();
            startPreview();
            if (mFirstTimeInitialized) startFaceDetection();
        } catch (CameraHardwareException e) {
            Util.showErrorAndFinish(this, R.string.cannot_connect_camera);
            return;
        } catch (CameraDisabledException e) {
            Util.showErrorAndFinish(this, R.string.camera_disabled);
            return;
        }
    }

    if (mSurfaceHolder != null) {
        // If first time initialization is not finished, put it in the
        // message queue.
        if (!mFirstTimeInitialized) {
            mHandler.sendEmptyMessage(FIRST_TIME_INIT);
        } else {
            initializeSecondTime();
        }
    }
    keepScreenOnAwhile();

    if (mCameraState == IDLE) {
        mOnResumeTime = SystemClock.uptimeMillis();
        mHandler.sendEmptyMessageDelayed(CHECK_DISPLAY_ROTATION, 100);
    }
}

In this function the lower-level camera object is obtained through mCameraDevice = Util.openCamera(this, mCameraId). The Util class is implemented in Util.java (android4.0\packages\apps\camera\src\com\android\camera); locate its openCamera function:
public static android.hardware.Camera openCamera(Activity activity, int cameraId)
        throws CameraHardwareException, CameraDisabledException {
    // Check if device policy has disabled the camera.
    DevicePolicyManager dpm = (DevicePolicyManager) activity.getSystemService(
            Context.DEVICE_POLICY_SERVICE);
    if (dpm.getCameraDisabled(null)) {
        throw new CameraDisabledException();
    }

    try {
        return CameraHolder.instance().open(cameraId);
    } catch (CameraHardwareException e) {
        // In eng build, we throw the exception so that test tool
        // can detect it and report it
        if ("eng".equals(Build.TYPE)) {
            throw new RuntimeException("openCamera failed", e);
        } else {
            throw e;
        }
    }
}

As this function shows, Android manages the lower-level camera through a singleton, CameraHolder.
Its implementation is in CameraHolder.java (android4.0\packages\apps\camera\src\com\android\camera); calling its open function returns a camera hardware object. Because the camera is an exclusive device that cannot be held by two processes at once, while Android is a multi-process environment, some inter-process mutual exclusion and synchronization is required.
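The holder idea can be sketched in plain Java: a synchronized singleton that hands out at most one "device" at a time. All names here are hypothetical, and this only models in-process exclusion; the real cross-process exclusion is enforced lower down, in the camera service.

```java
// Minimal sketch of a CameraHolder-style singleton (hypothetical names).
public class HolderSketch {
    private static HolderSketch sHolder;
    private boolean mInUse = false;

    public static synchronized HolderSketch instance() {
        if (sHolder == null) sHolder = new HolderSketch();
        return sHolder;
    }

    // Hand out the exclusive device, or fail if someone already holds it.
    public synchronized String open(int id) {
        if (mInUse) throw new IllegalStateException("camera already in use");
        mInUse = true;
        return "device-" + id;   // stands in for the real hardware object
    }

    public synchronized void release() { mInUse = false; }

    public static void main(String[] args) {
        HolderSketch h = HolderSketch.instance();
        System.out.println(h.open(0));          // device-0
        try {
            h.open(1);                          // second open fails
        } catch (IllegalStateException e) {
            System.out.println("busy");
        }
        h.release();
        System.out.println(h.open(1));          // device-1
    }
}
```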
Locate this class's open function:
public synchronized android.hardware.Camera open(int cameraId)
        throws CameraHardwareException {
    Assert(mUsers == 0);
    if (mCameraDevice != null && mCameraId != cameraId) {
        mCameraDevice.release();
        mCameraDevice = null;
        mCameraId = -1;
    }
    if (mCameraDevice == null) {
        try {
            Log.v(TAG, "open camera " + cameraId);
            mCameraDevice = android.hardware.Camera.open(cameraId);
            mCameraId = cameraId;
        } catch (RuntimeException e) {
            Log.e(TAG, "fail to connect Camera", e);
            throw new CameraHardwareException(e);
        }
        mParameters = mCameraDevice.getParameters();
    } else {
        try {
            mCameraDevice.reconnect();
        } catch (IOException e) {
            Log.e(TAG, "reconnect failed.");
            throw new CameraHardwareException(e);
        }
        mCameraDevice.setParameters(mParameters);
    }
    ++mUsers;
    mHandler.removeMessages(RELEASE_CAMERA);
    mKeepBeforeTime = 0;
    return mCameraDevice;
}

The call android.hardware.Camera.open(cameraId) enters the next layer of wrapping, the JNI layer. This is the bottom of the Java code; it wraps the lower-level C++ camera code through JNI. The wrapper is implemented in Camera.java (android4.0\frameworks\base\core\java\android\hardware). Below is part of that class; it defines quite a few callbacks:

public class Camera {
    private static final String TAG = "Camera";

    // These match the enums in frameworks/base/include/camera/Camera.h
    private static final int CAMERA_MSG_ERROR            = 0x001;
    private static final int CAMERA_MSG_SHUTTER          = 0x002;
    private static final int CAMERA_MSG_FOCUS            = 0x004;
    private static final int CAMERA_MSG_ZOOM             = 0x008;
    private static final int CAMERA_MSG_PREVIEW_FRAME    = 0x010;
    private static final int CAMERA_MSG_VIDEO_FRAME      = 0x020;
    private static final int CAMERA_MSG_POSTVIEW_FRAME   = 0x040;
    private static final int CAMERA_MSG_RAW_IMAGE        = 0x080;
    private static final int CAMERA_MSG_COMPRESSED_IMAGE = 0x100;
    private static final int CAMERA_MSG_RAW_IMAGE_NOTIFY = 0x200;
    private static final int CAMERA_MSG_PREVIEW_METADATA = 0x400;
    private static final int CAMERA_MSG_ALL_MSGS         = 0x4FF;

    private int mNativeContext; // accessed by native methods
    private EventHandler mEventHandler;
    private ShutterCallback mShutterCallback;
    private PictureCallback mRawImageCallback;
    private PictureCallback mJpegCallback;
    private PreviewCallback mPreviewCallback;
    private PictureCallback mPostviewCallback;
    private AutoFocusCallback mAutoFocusCallback;
    private OnZoomChangeListener mZoomListener;
    private FaceDetectionListener mFaceListener;
    private ErrorCallback mErrorCallback;

Locate the open function:
    public static Camera open(int cameraId) {
        return new Camera(cameraId);
    }

open is a static method that simply constructs a Camera object:
Camera(int cameraId) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mZoomListener = null;

    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }

    native_setup(new WeakReference<Camera>(this), cameraId);
}

The constructor calls the native_setup method, which corresponds to the C++ function android_hardware_Camera_native_setup, implemented in android_hardware_Camera.cpp (android4.0\frameworks\base\core\jni):
static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);

    if (camera == NULL) {
        jniThrowRuntimeException(env, "Fail to connect to camera service");
        return;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        jniThrowRuntimeException(env, "Camera initialization failed");
        return;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return;
    }

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong(thiz);
    camera->setListener(context);

    // save context in opaque field
    env->SetIntField(thiz, fields.context, (int)context.get());
}
android_hardware_Camera_native_setup calls the connect method of the Camera class, which is declared in Camera.h (android4.0\frameworks\base\include\camera). Locate the connect method:

sp<Camera> Camera::connect(int cameraId)
{
    LOGV("connect");
    sp<Camera> c = new Camera();
    const sp<ICameraService>& cs = getCameraService();
    if (cs != 0) {
        c->mCamera = cs->connect(c, cameraId);
    }
    if (c->mCamera != 0) {
        c->mCamera->asBinder()->linkToDeath(c);
        c->mStatus = NO_ERROR;
    } else {
        c.clear();
    }
    return c;
}

The code from here on down is the key part, because it involves the mechanism behind the Camera framework. The Camera system uses a server-client model: the service and the client live in different processes and communicate over Binder. The service side actually implements the camera operations; the client side invokes them through the Binder interface.
Continuing with the code: the function above calls getCameraService to obtain a reference to CameraService. ICameraService has two subclasses, BnCameraService and BpCameraService, which also inherit the IBinder interface and form the two ends of the Binder channel: BnXxx implements the concrete ICameraService functionality, while BpXxx wraps the ICameraService methods over Binder's transport. Concretely:
class ICameraService : public IInterface
{
public:
    enum {
        GET_NUMBER_OF_CAMERAS = IBinder::FIRST_CALL_TRANSACTION,
        GET_CAMERA_INFO,
        CONNECT
    };

public:
    DECLARE_META_INTERFACE(CameraService);

    virtual int32_t     getNumberOfCameras() = 0;
    virtual status_t    getCameraInfo(int cameraId,
                                      struct CameraInfo* cameraInfo) = 0;
    virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient,
                                int cameraId) = 0;
};

// ----------------------------------------------------------------------------

class BnCameraService: public BnInterface<ICameraService>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};

class BpCameraService: public BpInterface<ICameraService>
{
public:
    BpCameraService(const sp<IBinder>& impl)
        : BpInterface<ICameraService>(impl)
    {
    }

    // get number of cameras available
    virtual int32_t getNumberOfCameras()
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        remote()->transact(BnCameraService::GET_NUMBER_OF_CAMERAS, data, &reply);
        return reply.readInt32();
    }

    // get information about a camera
    virtual status_t getCameraInfo(int cameraId,
                                   struct CameraInfo* cameraInfo) {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        data.writeInt32(cameraId);
        remote()->transact(BnCameraService::GET_CAMERA_INFO, data, &reply);
        cameraInfo->facing = reply.readInt32();
        cameraInfo->orientation = reply.readInt32();
        return reply.readInt32();
    }

    // connect to camera service
    virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient, int cameraId)
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        data.writeStrongBinder(cameraClient->asBinder());
        data.writeInt32(cameraId);
        remote()->transact(BnCameraService::CONNECT, data, &reply);
        return interface_cast<ICamera>(reply.readStrongBinder());
    }
};

IMPLEMENT_META_INTERFACE(CameraService, "android.hardware.ICameraService");

// ----------------------------------------------------------------------

status_t BnCameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case GET_NUMBER_OF_CAMERAS: {
            CHECK_INTERFACE(ICameraService, data, reply);
            reply->writeInt32(getNumberOfCameras());
            return NO_ERROR;
        }
        case GET_CAMERA_INFO: {
            CHECK_INTERFACE(ICameraService, data, reply);
            CameraInfo cameraInfo;
            memset(&cameraInfo, 0, sizeof(cameraInfo));
            status_t result = getCameraInfo(data.readInt32(), &cameraInfo);
            reply->writeInt32(cameraInfo.facing);
            reply->writeInt32(cameraInfo.orientation);
            reply->writeInt32(result);
            return NO_ERROR;
        }
        case CONNECT: {
            CHECK_INTERFACE(ICameraService, data, reply);
            sp<ICameraClient> cameraClient =
                interface_cast<ICameraClient>(data.readStrongBinder());
            sp<ICamera> camera = connect(cameraClient, data.readInt32());
            reply->writeStrongBinder(camera->asBinder());
            return NO_ERROR;
        }
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}

// ----------------------------------------------------------------------------
Now return to sp<Camera> Camera::connect(int cameraId) and locate the getCameraService method:
const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            LOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while(true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    LOGE_IF(mCameraService == 0, "no CameraService!?");
    return mCameraService;
}

Note the line mCameraService = interface_cast<ICameraService>(binder): mCameraService is of type ICameraService, but concretely it is a BpCameraService, since that is the class that implements the ICameraService methods on the client side.
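getCameraService's poll-and-sleep lookup pattern can be sketched outside Android in plain Java. The registry, names, and timings below are hypothetical stand-ins for ServiceManager and the real 0.5-second wait.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a wait-until-published service lookup with caching
// (hypothetical names; ConcurrentHashMap stands in for ServiceManager).
public class LookupSketch {
    static final Map<String, Object> REGISTRY = new ConcurrentHashMap<>();
    private Object mCached;

    Object getService(String name, int maxTries) throws InterruptedException {
        if (mCached != null) return mCached;      // cached, like mCameraService
        for (int i = 0; i < maxTries; i++) {
            Object binder = REGISTRY.get(name);
            if (binder != null) { mCached = binder; return binder; }
            Thread.sleep(5);                      // the real code sleeps 0.5 s
        }
        throw new IllegalStateException(name + " not published");
    }

    public static void main(String[] args) throws Exception {
        LookupSketch c = new LookupSketch();
        // Publish the service from another thread after a short delay,
        // as if the server process started later than the client.
        new Thread(() -> {
            try { Thread.sleep(20); } catch (InterruptedException ignored) {}
            REGISTRY.put("media.camera", "service-stub");
        }).start();
        System.out.println(c.getService("media.camera", 100)); // service-stub
    }
}
```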
To summarize the Binder mechanism above (looking only at how it is used, without digging into the underlying implementation), the basic steps are:
1. Define the inter-process interface, here ICameraService;
2. Implement it in BnCameraService and BpCameraService, which also derive from BnInterface and BpInterface respectively;
3. Have the server register its Binder with ServiceManager, and the client obtain that Binder from ServiceManager;
4. Two-way inter-process communication is then possible.
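The Bn/Bp split can be sketched in plain Java, with a byte array standing in for the Binder kernel transport. All names here are hypothetical; the real Binder does the marshalling with Parcel and crosses a process boundary, which this sketch does not.

```java
import java.io.*;

// Shared interface, like ICameraService (one method kept for brevity).
interface CameraCountService { int getNumberOfCameras(); }

// "Bn" side: unpacks the request, runs the real code, packs the reply.
class BnSketch implements CameraCountService {
    public int getNumberOfCameras() { return 2; }   // the real implementation

    byte[] onTransact(byte[] request) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(request));
        int code = in.readInt();
        ByteArrayOutputStream reply = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(reply);
        if (code == 1) out.writeInt(getNumberOfCameras());
        return reply.toByteArray();
    }
}

// "Bp" side: packs the arguments, "transacts", unpacks the result.
class BpSketch implements CameraCountService {
    private final BnSketch mRemote; // stands in for the remote Binder handle

    BpSketch(BnSketch remote) { mRemote = remote; }

    public int getNumberOfCameras() {
        try {
            ByteArrayOutputStream data = new ByteArrayOutputStream();
            new DataOutputStream(data).writeInt(1);   // transaction code
            byte[] reply = mRemote.onTransact(data.toByteArray());
            return new DataInputStream(new ByteArrayInputStream(reply)).readInt();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}

public class BinderSketch {
    public static void main(String[] args) {
        // The caller sees only the interface, exactly like using ICameraService.
        CameraCountService svc = new BpSketch(new BnSketch());
        System.out.println(svc.getNumberOfCameras()); // 2
    }
}
```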
With the ICameraService reference returned by getCameraService in hand, its connect method is called to obtain an ICamera reference: c->mCamera = cs->connect(c, cameraId). Stepping into connect, this is its concrete implementation in BpCameraService:
virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient, int cameraId)
{
    Parcel data, reply;
    data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
    data.writeStrongBinder(cameraClient->asBinder());
    data.writeInt32(cameraId);
    remote()->transact(BnCameraService::CONNECT, data, &reply);
    return interface_cast<ICamera>(reply.readStrongBinder());
}

The ICamera returned here is actually a BpCamera object. This is an anonymous Binder: obtaining CameraService earlier used a named Binder, which has to be looked up through ServiceManager, whereas an anonymous Binder can be handed over across a communication channel that has already been established (by a named Binder). That covers the framework plumbing; the concrete camera operations are defined by the ICamera interface, shown below:

class ICamera: public IInterface
{
public:
    DECLARE_META_INTERFACE(Camera);

    virtual void            disconnect() = 0;

    // connect new client with existing camera remote
    virtual status_t        connect(const sp<ICameraClient>& client) = 0;

    // prevent other processes from using this ICamera interface
    virtual status_t        lock() = 0;

    // allow other processes to use this ICamera interface
    virtual status_t        unlock() = 0;

    // pass the buffered Surface to the camera service
    virtual status_t        setPreviewDisplay(const sp<Surface>& surface) = 0;

    // pass the buffered ISurfaceTexture to the camera service
    virtual status_t        setPreviewTexture(
                                const sp<ISurfaceTexture>& surfaceTexture) = 0;

    // set the preview callback flag to affect how the received frames from
    // preview are handled.
    virtual void            setPreviewCallbackFlag(int flag) = 0;

    // start preview mode, must call setPreviewDisplay first
    virtual status_t        startPreview() = 0;

    // stop preview mode
    virtual void            stopPreview() = 0;

    // get preview state
    virtual bool            previewEnabled() = 0;

    // start recording mode
    virtual status_t        startRecording() = 0;

    // stop recording mode
    virtual void            stopRecording() = 0;

    // get recording state
    virtual bool            recordingEnabled() = 0;

    // release a recording frame
    virtual void            releaseRecordingFrame(const sp<IMemory>& mem) = 0;

    // auto focus
    virtual status_t        autoFocus() = 0;

    // cancel auto focus
    virtual status_t        cancelAutoFocus() = 0;

    /*
     * take a picture.
     * @param msgType the message type an application selectively turn on/off
     * on a photo-by-photo basis. The supported message types are:
     * CAMERA_MSG_SHUTTER, CAMERA_MSG_RAW_IMAGE, CAMERA_MSG_COMPRESSED_IMAGE,
     * and CAMERA_MSG_POSTVIEW_FRAME. Any other message types will be ignored.
     */
    virtual status_t        takePicture(int msgType) = 0;

    // set preview/capture parameters - key/value pairs
    virtual status_t        setParameters(const String8& params) = 0;

    // get preview/capture parameters - key/value pairs
    virtual String8         getParameters() const = 0;

    // send command to camera driver
    virtual status_t        sendCommand(int32_t cmd, int32_t arg1, int32_t arg2) = 0;

    // tell the camera hal to store meta data or real YUV data in video buffers.
    virtual status_t        storeMetaDataInBuffers(bool enabled) = 0;
};
The ICamera interface has two subclasses, BnCamera and BpCamera, the two ends of the Binder channel. BpCamera provides the interface the client calls; BnCamera wraps the server side, but it does not actually implement the ICamera methods either — that happens in its subclass CameraService::Client, which in turn calls into the hardware abstraction layer to carry out the real camera operations.
Next we would tie together how all these Camera classes in Android relate to one another.
... (unfinished)
A Brief Analysis of the Android Camera Architecture
1. How the camera forms an image
(Figure: Camera workflow diagram)
The camera's imaging process can be summarized simply: the scene passes through the lens, which projects an optical image onto the surface of the image sensor; the sensor converts it into an electrical signal, an A/D (analog-to-digital) converter digitizes it, a digital signal processor (DSP) processes the digital image, and the result travels over an I/O interface to the CPU and finally appears on the display. In other words, a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor receives the image the lens delivers, an A/D converter turns it into a digital signal, and the signal is encoded and stored.
The flow is:
1. The CCD/CMOS converts the light from the subject into an electrical signal — an electronic image (an analog signal).
2. An analog-to-digital converter (ADC) chip converts the analog signal into a digital one.
3. A DSP or an encoding library compresses the digital signal and converts it into a specific image file format for storage.
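The ADC stage (step 2 above) can be illustrated with a toy quantizer that maps an analog level in [0, 1) to an n-bit digital code. This is purely illustrative arithmetic, not driver code.

```java
// Toy ADC: quantize an analog level in [0, 1) to an n-bit code.
public class AdcSketch {
    static int quantize(double level, int bits) {
        int steps = 1 << bits;                    // 2^bits possible codes
        int code = (int) (level * steps);         // truncate to a step
        return Math.min(Math.max(code, 0), steps - 1); // clamp out-of-range input
    }

    public static void main(String[] args) {
        System.out.println(quantize(0.0, 8));     // 0
        System.out.println(quantize(0.5, 8));     // 128
        System.out.println(quantize(0.999, 8));   // 255
    }
}
```

An 8-bit ADC, as used here, yields the familiar 0-255 range of one image channel.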
A digital camera's optical lens works like a film camera's, focusing the image onto the light-sensitive device — the CCD. The CCD takes the place of film: its job is to convert light signals into electrical ones, just as in television imaging. The CCD is a semiconductor device and the heart of a digital camera; the number of sensing cells it contains determines imaging quality, i.e. the pixel count. More cells mean more pixels and better images, and pixel count usually indicates a digital camera's class and technical specification.
2. The Android Camera framework
Android's Camera subsystem provides a framework for taking pictures and recording video. It links the upper-level Camera application with the application framework and the user-space libraries, and it is those user-space libraries that communicate with the camera hardware layer and thereby operate the hardware.
3. Layout of the Android Camera code
The Camera code lives mainly in the following places. The Java part of the Camera app is under packages/apps/Camera/, with Camera.java as the main implementation file; it compiles into Camera.apk.
The main classes in the com.android.camera package:
PhotoViewer: GalleryPicker.java (all image sets) -> ImageGallery.java (the image list in one folder) -> ViewImage.java (viewing one image)
VideoPlayer: GalleryPicker.java (all video sets) -> MovieView.java (watching one video)
Camera: Camera.java (camera preview and still capture)
VideoCamera: VideoCamera.java (video preview and recording)
The framework part that upper-level applications call:
base/core/java/android/hardware/Camera.java
This compiles into framework.jar.
The JNI part:
frameworks/base/core/jni/android_hardware_Camera.cpp
This compiles into libandroid_runtime.so.
The Camera UI library:
frameworks/base/libs/ui/camera
This compiles into libcamera_client.so.
The Camera service:
frameworks/base/camera/libcameraservice/
This compiles into libcameraservice.so.
The Camera HAL:
hardware/msm7k/libcamera
vendor/qcom/android-open/libcamera2
To realize a camera with concrete functionality, the HAL needs a hardware-specific camera library (built, for example, on the video-for-linux driver plus a JPEG encoder, or directly on a chip vendor's private libraries, such as Qualcomm's libcamera.so and libqcamera.so). That library implements the interface defined by CameraHardwareInterface, calls into the relevant libraries, and drives the corresponding driver to operate the camera hardware. It is loaded and called by the camera service library, libcameraservice.so.
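The interface-plus-vendor-library arrangement can be sketched in plain Java. The names below are hypothetical (the real contract is the C++ CameraHardwareInterface); the point is that the service is written once against the interface, while each vendor ships its own implementation.

```java
// Hypothetical stand-in for the HAL contract the framework defines.
interface CameraHardware {
    void startPreview();
    String vendor();
}

// A hypothetical vendor implementation, standing in for e.g. libqcamera.
class QualcommCameraHardware implements CameraHardware {
    private boolean mPreviewing = false;
    public void startPreview() { mPreviewing = true; }
    public String vendor() { return "qcom"; }
    boolean previewing() { return mPreviewing; }
}

public class HalSketch {
    // The "camera service": depends only on the interface, never the vendor class.
    static String describe(CameraHardware hw) {
        hw.startPreview();
        return "preview started on " + hw.vendor();
    }

    public static void main(String[] args) {
        System.out.println(describe(new QualcommCameraHardware()));
        // preview started on qcom
    }
}
```

Swapping in a different vendor class changes nothing in the service code, which is exactly why libcameraservice.so can load any conforming HAL library.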