Table of Contents
- What is CBCT
- CBCT technical approaches
- Using third-party tools
- Using Python
- Using the front end
- Pros and cons of a pure front-end approach
- Implementing CBCT with VolView
- Using VolView
- 1. Clone the code
- 2. Install dependencies
- 3. Run it
- 4. Result
- Advanced: pairing VolView with Python to fix stuttering
- 1. Modify VtkThreeView.vue
- 2. Add Custom3DView.vue
- 3. Generate an STL file with Python
- 4. Final result
What is CBCT
Radiology imaging is an essential part of medical software, and displaying, editing, and processing those images is a core concern. Among the many kinds of radiological imaging, CBCT is a key one. CBCT is short for oral and maxillofacial cone-beam CT: a cone-shaped X-ray beam rotates around the patient's head while scanning, and computer algorithms reconstruct high-resolution three-dimensional images.
CBCT covers almost every subspecialty of dentistry:
- Implant dentistry: assess jawbone density and nerve-canal position; supports implant positioning and surgical guide design.
- Orthodontics and impacted teeth: visualize tooth alignment, the position of impacted teeth, and their relation to surrounding tissue, reducing extraction risk.
- Endodontics: diagnose complex root canals, root fractures, and periapical lesions, improving treatment precision.
- Maxillofacial surgery: preoperative evaluation and postoperative monitoring of tumors and fractures.
- Temporomandibular joint: clearly shows structural abnormalities of the joint, aiding diagnosis of TMJ disorders.
CBCT technical approaches
Using third-party tools
Quite a few tools can produce a CBCT rendering, for example Slicer.
Using Python
Use pydicom together with mpl_toolkits to display the volume in 3D.
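The idea can be sketched as follows. This is a minimal, hedged example: a synthetic cube mesh stands in for a surface extracted from a real DICOM volume (in a real pipeline the vertices would come from slices loaded with pydicom, as shown later in this article), and `surface_preview.png` is just an illustrative output filename.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

# Synthetic cube mesh standing in for a surface extracted from a CBCT volume.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
faces = [[0, 1, 2, 3], [4, 5, 6, 7], [0, 1, 5, 4],
         [2, 3, 7, 6], [1, 2, 6, 5], [0, 3, 7, 4]]

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
# mpl_toolkits renders the mesh as a collection of polygons.
ax.add_collection3d(Poly3DCollection([verts[f] for f in faces],
                                     facecolors="lightgray",
                                     edgecolors="k", alpha=0.5))
ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_zlim(0, 1)
fig.savefig("surface_preview.png")
```

Replacing the synthetic `verts`/`faces` with the output of `skimage.measure.marching_cubes` on a stacked pydicom volume gives the 3D display this section refers to.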
Using the front end
Use VTK.js, Three.js, or WebAssembly.
| Approach | Technology | Use case | Pros and cons |
|---|---|---|---|
| VTK.js | WebGL + VTK | Medical imaging visualization (CT/MRI) | High quality, native volume rendering support, but complex data conversion |
| Three.js + 3D textures | WebGL + shaders | General 3D visualization | Good compatibility, friendly to front-end developers, but lower medical-grade accuracy |
| WebAssembly + medical engine | WASM | Professional medical imaging | Professional-grade software with strong performance, but hard to develop |
Pros and cons of a pure front-end approach
| Category | Pros | Cons |
|---|---|---|
| Performance and cost | 1. Little server dependency, saving hardware and maintenance costs. 2. Real-time interaction with low latency. | 1. Browser memory limits can lead to crashes. 2. Weak GPUs on low-end devices cause rendering stutter. |
| Data privacy | 1. Data never has to be uploaded to a server, in line with privacy regulations such as HIPAA. 2. Offline caching allows use without a network connection. | 1. Preprocessing may depend on a backend, which can temporarily expose sensitive data. |
| Features and compatibility | 1. Supports basic 3D operations (rotation, zoom, cross-section cuts). | 1. Complex algorithms (e.g. deep-learning segmentation) are hard to implement. 2. Limited browser compatibility (e.g. older Safari). |
| Development and deployment | 1. Easy to deploy; static front-end assets can be hosted on a CDN. 2. Suited to lightweight applications (education, previews). | 1. Loading large datasets (e.g. a full-skull CBCT) is slow. 2. Extra compression and chunked-loading logic is needed. |
Implementing CBCT with VolView
VolView is an open-source medical image viewer built on VTK.js. You can drag and drop DICOM data straight into the browser to get 2D slices and cinematic 3D volume rendering, with annotation and measurement tools included. All data is processed locally, so privacy is preserved. No installation is required and it works across platforms, making it suitable for clinical diagnosis as well as research and education.
Website: https://volview.kitware.com/
An introductory video about VolView is also available.
Using VolView
1. Clone the code
GitHub: https://github.com/Kitware/VolView
2. Install dependencies
npm i
3. Run it
npm run dev
4. Result
Note: click on the left to load the demo data online, or click on the right to upload local DICOM files.
Advanced: pairing VolView with Python to fix stuttering
As noted above, the pure front-end CBCT solution based on vtk.js can display what we need without relying on any third-party software, but its performance demands mean the machine opening the page needs a fairly strong GPU; otherwise it stutters badly.
The idea is as follows:
The stutter comes mainly from the 3D view, whose volume rendering is computed in the browser in real time. To remove the stutter, we need to take that 3D work out of the browser.
We can replace VolView's 3D rendering with an STL file generated server-side by Python.
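The article does not spell out how the generated STL reaches the browser; one simple option is a static file server with a CORS header, since the front-end component later in this article fetches the STL cross-origin. A minimal standard-library sketch follows; the directory name and port are assumptions, not part of the original setup.

```python
import functools
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    """Serve static files with a CORS header so a front end running
    on another origin can fetch the generated STL."""
    def end_headers(self):
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

def make_stl_server(directory, port=8000):
    # Bind a threaded static file server rooted at the STL output directory.
    handler = functools.partial(CORSRequestHandler, directory=directory)
    return ThreadingHTTPServer(("0.0.0.0", port), handler)

# Example (blocks): make_stl_server("./stl_path", 8000).serve_forever()
```

With this running, the front end can load `http://<server>:8000/cube2.stl` (the filename matches the Python export step below; adjust to your own paths).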
1. Modify VtkThreeView.vue
Remove the component that renders the 3D view and replace it with our custom component: <Custom3DView />.
The full code:
<template>
  <div class="vtk-container-wrapper vtk-three-container">
    <div class="vtk-container" :class="active ? 'active' : ''">
      <!-- the 3D reconstruction is drawn here: start -->
      <div class="vtk-sub-container">
        <!-- <div
          class="vtk-view"
          ref="vtkContainerRef"
          data-testid="vtk-view vtk-three-view"
        ></div> -->
        <Custom3DView />
      </div>
      <!-- the 3D reconstruction is drawn here: end -->
      <div class="overlay-no-events tool-layer">
        <crop-tool :view-id="viewID" />
        <pan-tool :viewId="viewID" />
      </div>
      <view-overlay-grid class="overlay-no-events view-annotations">
        <template v-slot:top-left>
          <div class="annotation-cell">
            <v-btn
              class="pointer-events-all"
              dark
              icon
              size="medium"
              variant="text"
              @click="resetCamera"
            >
              <v-icon size="medium" class="py-1">mdi-camera-flip-outline</v-icon>
              <v-tooltip
                location="right"
                activator="parent"
                transition="slide-x-transition"
              >Reset Camera</v-tooltip>
            </v-btn>
            <span class="ml-3">{{ topLeftLabel }}</span>
          </div>
        </template>
      </view-overlay-grid>
      <transition name="loading">
        <div v-if="isImageLoading" class="overlay-no-events loading">
          <div>Loading the image</div>
          <div><v-progress-circular indeterminate color="blue" /></div>
        </div>
      </transition>
    </div>
  </div>
</template>

<script lang="ts">
import {
  computed,
  defineComponent,
  onBeforeUnmount,
  onMounted,
  PropType,
  provide,
  ref,
  toRefs,
  watch,
  Ref,
  nextTick,
} from 'vue';
import { computedWithControl } from '@vueuse/core';
import { vec3 } from 'gl-matrix';
import vtkVolumeRepresentationProxy from '@kitware/vtk.js/Proxy/Representations/VolumeRepresentationProxy';
import { Mode as LookupTableProxyMode } from '@kitware/vtk.js/Proxy/Core/LookupTableProxy';
import vtkPiecewiseFunctionProxy from '@kitware/vtk.js/Proxy/Core/PiecewiseFunctionProxy';
import vtkVolumeMapper from '@kitware/vtk.js/Rendering/Core/VolumeMapper';
import vtkImageData from '@kitware/vtk.js/Common/DataModel/ImageData';
import { getDiagonalLength } from '@kitware/vtk.js/Common/DataModel/BoundingBox';
import type { Vector3 } from '@kitware/vtk.js/types';
import { useProxyManager } from '@/src/composables/useProxyManager';
import ViewOverlayGrid from '@/src/components/ViewOverlayGrid.vue';
import { useResizeObserver } from '../composables/useResizeObserver';
import { useCurrentImage } from '../composables/useCurrentImage';
import { useCameraOrientation } from '../composables/useCameraOrientation';
import vtkLPSView3DProxy from '../vtk/LPSView3DProxy';
import { useSceneBuilder } from '../composables/useSceneBuilder';
import { usePersistCameraConfig } from '../composables/usePersistCameraConfig';
import { useModelStore } from '../store/datasets-models';
import { LPSAxisDir } from '../types/lps';
import { useViewProxy } from '../composables/useViewProxy';
import { ViewProxyType } from '../core/proxies';
import { VolumeColorConfig } from '../store/view-configs/types';
import useVolumeColoringStore, {
  DEFAULT_AMBIENT,
  DEFAULT_DIFFUSE,
  DEFAULT_SPECULAR,
} from '../store/view-configs/volume-coloring';
import { getShiftedOpacityFromPreset } from '../utils/vtk-helpers';
import CropTool from './tools/crop/CropTool.vue';
import PanTool from './tools/PanTool.vue';
import { useWidgetManager } from '../composables/useWidgetManager';
import { VTKThreeViewWidgetManager } from '../constants';
import { useCropStore, croppingPlanesEqual } from '../store/tools/crop';
import { isViewAnimating } from '../composables/isViewAnimating';
import { ColoringConfig } from '../types/views';
import useViewCameraStore from '../store/view-configs/camera';
import { Maybe } from '../types';
import { useResetViewsEvents } from './tools/ResetViews.vue';
import Custom3DView from '@/src/components/Custom3DView.vue';

function useCvrEffect(
  config: Ref<Maybe<VolumeColorConfig>>,
  imageRep: Ref<vtkVolumeRepresentationProxy | null>,
  viewProxy: Ref<vtkLPSView3DProxy>
) {
  const cvrParams = computed(() => config.value?.cvr);
  const repMapper = computedWithControl(
    imageRep,
    () => imageRep.value?.getMapper() as vtkVolumeMapper | undefined
  );
  const image = computedWithControl(
    imageRep,
    () => imageRep.value?.getInputDataSet() as vtkImageData | null | undefined
  );
  const volume = computedWithControl(imageRep, () =>
    imageRep.value?.getVolumes()[0]
  );
  const renderer = computed(() => viewProxy.value.getRenderer());
  const isAnimating = isViewAnimating(viewProxy);
  const cvrEnabled = computed(() => {
    const enabled = !!cvrParams.value?.enabled;
    const animating = isAnimating.value;
    return enabled && !animating;
  });
  const requestRender = () => {
    if (!isAnimating.value) {
      viewProxy.value.renderLater();
    }
  };

  // lights
  const volumeCenter = computed(() => {
    if (!volume.value) return null;
    const volumeBounds = volume.value.getBounds();
    return [
      (volumeBounds[0] + volumeBounds[1]) / 2,
      (volumeBounds[2] + volumeBounds[3]) / 2,
      (volumeBounds[4] + volumeBounds[5]) / 2,
    ] as Vector3;
  });
  const lightFollowsCamera = computed(
    () => cvrParams.value?.lightFollowsCamera ?? true
  );

  watch(
    [volumeCenter, renderer, cvrEnabled, lightFollowsCamera],
    ([center, ren, enabled, lightFollowsCamera_]) => {
      if (!center) return;
      if (ren.getLights().length === 0) {
        ren.createLight();
      }
      const light = ren.getLights()[0];
      if (enabled) {
        light.setFocalPoint(...center);
        light.setColor(1, 1, 1);
        light.setIntensity(1);
        light.setConeAngle(90);
        light.setPositional(true);
        ren.setTwoSidedLighting(false);
        if (lightFollowsCamera_) {
          light.setLightTypeToHeadLight();
          ren.updateLightsGeometryToFollowCamera();
        } else {
          light.setLightTypeToSceneLight();
        }
      } else {
        light.setPositional(false);
      }
      requestRender();
    },
    { immediate: true }
  );

  // sampling distance
  const volumeQuality = computed(() => cvrParams.value?.volumeQuality);

  watch(
    [volume, image, repMapper, volumeQuality, cvrEnabled, isAnimating],
    ([volume_, image_, mapper, volumeQuality_, enabled, animating]) => {
      if (!volume_ || !mapper || volumeQuality_ == null || !image_) return;
      if (animating) {
        mapper.setSampleDistance(0.75);
        mapper.setMaximumSamplesPerRay(1000);
        mapper.setGlobalIlluminationReach(0);
        mapper.setComputeNormalFromOpacity(false);
      } else {
        const dims = image_.getDimensions();
        const spacing = image_.getSpacing();
        const spatialDiagonal = vec3.length(
          vec3.fromValues(
            dims[0] * spacing[0],
            dims[1] * spacing[1],
            dims[2] * spacing[2]
          )
        );
        // Use the average spacing for sampling by default
        let sampleDistance = spacing.reduce((a, b) => a + b) / 3.0;
        // Adjust the volume sampling by the quality slider value
        sampleDistance /= volumeQuality_ > 1 ? 0.5 * volumeQuality_ ** 2 : 1.0;
        const samplesPerRay = spatialDiagonal / sampleDistance + 1;
        mapper.setMaximumSamplesPerRay(samplesPerRay);
        mapper.setSampleDistance(sampleDistance);
        // Adjust the global illumination reach by volume quality slider
        mapper.setGlobalIlluminationReach(enabled ? 0.25 * volumeQuality_ : 0);
        mapper.setComputeNormalFromOpacity(!enabled && volumeQuality_ > 2);
      }
      requestRender();
    },
    { immediate: true }
  );

  // volume properties
  const ambient = computed(() => cvrParams.value?.ambient ?? 0);
  const diffuse = computed(() => cvrParams.value?.diffuse ?? 0);
  const specular = computed(() => cvrParams.value?.specular ?? 0);

  watch(
    [volume, image, ambient, diffuse, specular, cvrEnabled],
    ([volume_, image_, ambient_, diffuse_, specular_, enabled]) => {
      if (!volume_ || !image_) return;
      const property = volume_.getProperty();
      property.setScalarOpacityUnitDistance(
        0,
        (0.5 * getDiagonalLength(image_.getBounds())) /
          Math.max(...image_.getDimensions())
      );
      property.setShade(true);
      property.setUseGradientOpacity(0, !enabled);
      property.setGradientOpacityMinimumValue(0, 0.0);
      const dataRange = image_.getPointData().getScalars().getRange();
      property.setGradientOpacityMaximumValue(
        0,
        (dataRange[1] - dataRange[0]) * 0.01
      );
      property.setGradientOpacityMinimumOpacity(0, 0.0);
      property.setGradientOpacityMaximumOpacity(0, 1.0);
      // do not toggle these parameters when animating
      property.setAmbient(enabled ? ambient_ : DEFAULT_AMBIENT);
      property.setDiffuse(enabled ? diffuse_ : DEFAULT_DIFFUSE);
      property.setSpecular(enabled ? specular_ : DEFAULT_SPECULAR);
      requestRender();
    },
    { immediate: true }
  );

  // volumetric scattering blending
  const useVolumetricScatteringBlending = computed(
    () => cvrParams.value?.useVolumetricScatteringBlending ?? false
  );
  const volumetricScatteringBlending = computed(
    () => cvrParams.value?.volumetricScatteringBlending ?? 0
  );

  watch(
    [
      useVolumetricScatteringBlending,
      volumetricScatteringBlending,
      repMapper,
      cvrEnabled,
    ],
    ([useVsb, vsb, mapper, enabled]) => {
      if (!mapper) return;
      if (enabled && useVsb) {
        mapper.setVolumetricScatteringBlending(vsb);
      } else {
        mapper.setVolumetricScatteringBlending(0);
      }
      requestRender();
    },
    { immediate: true }
  );

  // local ambient occlusion
  const useLocalAmbientOcclusion = computed(
    () => cvrParams.value?.useLocalAmbientOcclusion ?? false
  );
  const laoKernelSize = computed(() => cvrParams.value?.laoKernelSize ?? 0);
  const laoKernelRadius = computed(() => cvrParams.value?.laoKernelRadius ?? 0);

  watch(
    [
      useLocalAmbientOcclusion,
      laoKernelSize,
      laoKernelRadius,
      repMapper,
      cvrEnabled,
    ],
    ([useLao, kernelSize, kernelRadius, mapper, enabled]) => {
      if (!mapper) return;
      if (enabled && useLao) {
        mapper.setLocalAmbientOcclusion(true);
        mapper.setLAOKernelSize(kernelSize);
        mapper.setLAOKernelRadius(kernelRadius);
      } else {
        mapper.setLocalAmbientOcclusion(false);
        mapper.setLAOKernelSize(0);
        mapper.setLAOKernelRadius(0);
      }
      requestRender();
    },
    { immediate: true }
  );
}

function useColoringEffect(
  config: Ref<Maybe<ColoringConfig>>,
  imageRep: Ref<vtkVolumeRepresentationProxy | null>,
  viewProxy: Ref<vtkLPSView3DProxy>
) {
  const colorBy = computed(() => config.value?.colorBy);
  const colorTransferFunction = computed(() => config.value?.transferFunction);
  const opacityFunction = computed(() => config.value?.opacityFunction);
  const proxyManager = useProxyManager();

  watch(
    [imageRep, colorBy, colorTransferFunction, opacityFunction],
    ([rep, colorBy_, colorFunc, opacityFunc]) => {
      if (!rep || !colorBy_ || !colorFunc || !opacityFunc || !proxyManager) {
        return;
      }
      const { arrayName, location } = colorBy_;
      const lut = proxyManager.getLookupTable(arrayName);
      lut.setMode(LookupTableProxyMode.Preset);
      lut.setPresetName(colorFunc.preset);
      lut.setDataRange(...colorFunc.mappingRange);
      const pwf = proxyManager.getPiecewiseFunction(arrayName);
      pwf.setMode(opacityFunc.mode);
      pwf.setDataRange(...opacityFunc.mappingRange);
      switch (opacityFunc.mode) {
        case vtkPiecewiseFunctionProxy.Mode.Gaussians:
          pwf.setGaussians(opacityFunc.gaussians);
          break;
        case vtkPiecewiseFunctionProxy.Mode.Points: {
          const opacityPoints = getShiftedOpacityFromPreset(
            opacityFunc.preset,
            opacityFunc.mappingRange,
            opacityFunc.shift,
            opacityFunc.shiftAlpha
          );
          if (opacityPoints) {
            pwf.setPoints(opacityPoints);
          }
          break;
        }
        case vtkPiecewiseFunctionProxy.Mode.Nodes:
          pwf.setNodes(opacityFunc.nodes);
          break;
        default:
      }
      if (rep) {
        // control color range manually
        rep.setRescaleOnColorBy(false);
        rep.setColorBy(arrayName, location);
      }
      // Need to trigger a render for when we are restoring from a state file
      viewProxy.value.renderLater();
    },
    { immediate: true }
  );
}

export default defineComponent({
  props: {
    id: {
      type: String,
      required: true,
    },
    viewDirection: {
      type: String as PropType<LPSAxisDir>,
      required: true,
    },
    viewUp: {
      type: String as PropType<LPSAxisDir>,
      required: true,
    },
  },
  components: {
    ViewOverlayGrid,
    CropTool,
    PanTool,
    Custom3DView,
  },
  setup(props) {
    const modelStore = useModelStore();
    const volumeColoringStore = useVolumeColoringStore();
    const viewCameraStore = useViewCameraStore();
    const { id: viewID, viewDirection, viewUp } = toRefs(props);
    const vtkContainerRef = ref<HTMLElement>();

    // --- computed vars --- //
    const {
      currentImageID: curImageID,
      currentImageMetadata: curImageMetadata,
      currentImageData,
      isImageLoading,
    } = useCurrentImage();

    // --- view proxy setup --- //
    const { viewProxy, setContainer: setViewProxyContainer } =
      useViewProxy<vtkLPSView3DProxy>(viewID, ViewProxyType.Volume);

    onMounted(() => {
      viewProxy.value.setOrientationAxesVisibility(true);
      viewProxy.value.setOrientationAxesType('cube');
      viewProxy.value.setBackground([0, 0, 0, 0]);
      setViewProxyContainer(vtkContainerRef.value);
    });

    onBeforeUnmount(() => {
      setViewProxyContainer(null);
      viewProxy.value.setContainer(null);
    });

    useResizeObserver(vtkContainerRef, () => viewProxy.value.resize());

    // --- scene setup --- //
    const { baseImageRep } = useSceneBuilder<vtkVolumeRepresentationProxy>(
      viewID,
      {
        baseImage: curImageID,
        models: computed(() => modelStore.idList),
      }
    );

    // --- picking --- //
    // disables picking for crop control and more
    watch(
      baseImageRep,
      (rep) => {
        if (rep) {
          rep.getVolumes().forEach((volume) => volume.setPickable(false));
        }
      },
      { immediate: true }
    );

    // --- widget manager --- //
    const { widgetManager } = useWidgetManager(viewProxy);
    provide(VTKThreeViewWidgetManager, widgetManager);

    // --- camera setup --- //
    const { cameraUpVec, cameraDirVec } = useCameraOrientation(
      viewDirection,
      viewUp,
      curImageMetadata
    );

    const resetCamera = () => {
      const bounds = curImageMetadata.value.worldBounds;
      const center = [
        (bounds[0] + bounds[1]) / 2,
        (bounds[2] + bounds[3]) / 2,
        (bounds[4] + bounds[5]) / 2,
      ] as vec3;
      viewProxy.value.updateCamera(
        cameraDirVec.value,
        cameraUpVec.value,
        center
      );
      viewProxy.value.resetCamera();
      viewProxy.value.renderLater();
    };

    watch(
      [baseImageRep, cameraDirVec, cameraUpVec],
      () => {
        const cameraConfig = viewCameraStore.getConfig(
          viewID.value,
          curImageID.value
        );
        // We don't want to reset the camera if we have a config we are restoring
        if (!cameraConfig) {
          // nextTick ensures resetCamera gets called after
          // useSceneBuilder refreshes the scene.
          nextTick(resetCamera);
        }
      },
      {
        immediate: true,
      }
    );

    const { restoreCameraConfig } = usePersistCameraConfig(
      viewID,
      curImageID,
      viewProxy,
      'position',
      'focalPoint',
      'directionOfProjection',
      'viewUp'
    );

    watch(curImageID, () => {
      // See if we have a camera configuration to restore
      const cameraConfig = viewCameraStore.getConfig(
        viewID.value,
        curImageID.value
      );
      if (cameraConfig) {
        restoreCameraConfig(cameraConfig);
        viewProxy.value.getRenderer().resetCameraClippingRange();
        viewProxy.value.renderLater();
      }
    });

    // --- coloring setup --- //
    const volumeColorConfig = computed(() =>
      volumeColoringStore.getConfig(viewID.value, curImageID.value)
    );

    watch(
      [viewID, curImageID],
      () => {
        if (
          curImageID.value &&
          currentImageData.value &&
          !volumeColorConfig.value
        ) {
          volumeColoringStore.resetToDefaultColoring(
            viewID.value,
            curImageID.value,
            currentImageData.value
          );
        }
      },
      { immediate: true }
    );

    // --- CVR parameters --- //
    useCvrEffect(volumeColorConfig, baseImageRep, viewProxy);

    // --- coloring --- //
    useColoringEffect(volumeColorConfig, baseImageRep, viewProxy);

    // --- cropping planes --- //
    const cropStore = useCropStore();
    const croppingPlanes = cropStore.getComputedVTKPlanes(curImageID);

    watch(
      croppingPlanes,
      (planes, oldPlanes) => {
        const mapper = baseImageRep.value?.getMapper();
        if (
          !mapper ||
          !planes ||
          (oldPlanes && croppingPlanesEqual(planes, oldPlanes))
        )
          return;
        mapper.removeAllClippingPlanes();
        planes.forEach((plane) => mapper.addClippingPlane(plane));
        mapper.modified();
        viewProxy.value.renderLater();
      },
      { immediate: true }
    );

    // --- Listen to ResetViews event --- //
    const events = useResetViewsEvents();
    events.onClick(() => resetCamera());

    // --- template vars --- //
    return {
      vtkContainerRef,
      viewID,
      active: false,
      topLeftLabel: computed(
        () =>
          volumeColorConfig.value?.transferFunction.preset.replace(/-/g, ' ') ??
          ''
      ),
      isImageLoading,
      resetCamera,
    };
  },
});
</script>

<style scoped>
.model-container {
  width: 100%;
  height: 600px;
  position: relative;
}
</style>

<style scoped src="@/src/components/styles/vtk-view.css"></style>
<style scoped src="@/src/components/styles/utils.css"></style>

<style scoped>
.vtk-three-container {
  background-color: black;
  grid-template-columns: auto;
}
</style>
2. Add Custom3DView.vue
Create Custom3DView.vue under the src/components directory. It displays the STL generated by the Python backend.
The full code:
<template>
  <div ref="container" class="model-container"></div>
</template>

<script>
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls';
import { STLLoader } from 'three/examples/jsm/loaders/STLLoader';
import { toRaw } from 'vue';

export default {
  data() {
    return {
      loadingProgress: 0,
      loadError: null,
      animateId: null,
    };
  },
  mounted() {
    this.initThreeContext();
    this.loadSTLModel();
    this.setupAnimation();
  },
  beforeDestroy() {
    this.cleanupResources();
  },
  methods: {
    initThreeContext() {
      const container = this.$refs.container;
      // Scene setup
      this._scene = new THREE.Scene();
      this._scene.background = new THREE.Color(0x000000);
      // Camera setup
      this._camera = new THREE.PerspectiveCamera(
        45, // narrower FOV for a closer view
        container.clientWidth / container.clientHeight,
        0.1,
        500 // smaller far plane improves rendering performance
      );
      this._camera.position.set(30, 30, 30); // start closer to the model
      // Renderer setup (black background)
      this._renderer = new THREE.WebGLRenderer({
        antialias: true,
        alpha: true, // keep the alpha channel for future extensions
      });
      this._renderer.setClearColor(0x000000, 1); // double-ensure the background color
      this._renderer.setSize(container.clientWidth, container.clientHeight);
      container.appendChild(this._renderer.domElement);
      // Lighting
      const ambientLight = new THREE.AmbientLight(0x404040);
      const directionalLight = new THREE.DirectionalLight(0xffffff, 0.8);
      directionalLight.position.set(15, 15, 15);
      this._scene.add(ambientLight, directionalLight);
      // Controls
      this._controls = new OrbitControls(
        toRaw(this._camera),
        this._renderer.domElement
      );
      this._controls.enableDamping = true;
      this._controls.dampingFactor = 0.05;
    },
    loadSTLModel() {
      const objSTLLoader = new STLLoader();
      objSTLLoader.crossOrigin = 'Anonymous';
      objSTLLoader.load(
        'https://stl所在路徑.stl',
        (geometry) => {
          // Clear any previous model before adding the new one
          this.clearExistingModel();
          // Material (light gray)
          const material = new THREE.MeshPhongMaterial({
            color: 0xcccccc, // light gray
            specular: 0x222222,
            shininess: 150,
            side: THREE.DoubleSide,
          });
          const mesh = new THREE.Mesh(geometry, material);
          geometry.center();
          mesh.scale.set(0.1, 0.1, 0.1);
          // Auto-focus on the model
          const box = new THREE.Box3().setFromObject(mesh);
          const center = box.getCenter(new THREE.Vector3());
          toRaw(this._camera).lookAt(center);
          toRaw(this._scene).add(mesh);
        },
        (progress) => {
          this.loadingProgress = (progress.loaded / progress.total) * 100;
        },
        (error) => {
          this.loadError = 'Failed to load the model; check the network or file path';
        }
      );
    },
    clearExistingModel() {
      // Remove previously loaded meshes (this method was missing in the
      // original snippet but is called from loadSTLModel)
      const scene = toRaw(this._scene);
      scene.children
        .filter((obj) => obj.isMesh)
        .forEach((obj) => {
          scene.remove(obj);
          obj.geometry.dispose();
          obj.material.dispose();
        });
    },
    setupAnimation() {
      const animate = () => {
        this.animateId = requestAnimationFrame(animate);
        toRaw(this._controls).update();
        this._renderer.render(toRaw(this._scene), toRaw(this._camera));
      };
      animate();
    },
    cleanupResources() {
      cancelAnimationFrame(this.animateId);
      toRaw(this._controls).dispose();
      this._renderer.dispose();
      toRaw(this._scene).traverse((obj) => {
        if (obj.isMesh) {
          obj.geometry.dispose();
          obj.material.dispose();
        }
      });
    },
  },
};
</script>

<style scoped>
.model-container {
  width: 100%;
  height: 600px;
  position: relative;
  background: #000; /* fallback black background */
}
</style>
3. Generate an STL file with Python
On the server, generate the STL with Python:
import numpy as np
import pydicom
import pydicom.pixel_data_handlers.gdcm_handler as gdcm_handler
import pylibjpeg  # registers additional pixel-data handlers on import
import os
import scipy.ndimage
from skimage import measure
import trimesh

pydicom.config.image_handlers = [None, gdcm_handler]


def load_scan(path):
    slices = []
    for s in os.listdir(path):
        if s == '.DS_Store':
            # Skip; otherwise: AttributeError: 'FileDataset' object has no attribute 'InstanceNumber'
            continue
        ds = pydicom.dcmread(path + '/' + s, force=True)
        ds.PhotometricInterpretation = 'YBR_FULL'
        slices.append(ds)
    slices.sort(key=lambda x: int(x.InstanceNumber))
    try:
        slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])
    except Exception:
        slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)
    for s in slices:
        s.SliceThickness = slice_thickness
    return slices


def get_pixels_hu(scans):
    image = np.stack([s.pixel_array for s in scans])
    image = image.astype(np.int16)
    image[image == -2000] = 0
    # Convert to Hounsfield units (HU)
    intercept = scans[0].RescaleIntercept
    slope = scans[0].RescaleSlope
    if slope != 1:
        image = slope * image.astype(np.float64)
        image = image.astype(np.int16)
    image += np.int16(intercept)
    return np.array(image, dtype=np.int16)


def make_mesh(image, threshold=-300, step_size=1):
    print("Transposing surface")
    p = image.transpose(2, 1, 0)
    print("Calculating surface")
    verts, faces, norm, val = measure.marching_cubes(
        p, threshold, step_size=step_size, allow_degenerate=True)
    return verts, faces


def resample(image, scan, new_spacing=[1, 1, 1]):
    # Determine current pixel spacing; tune this function to get better results
    spacing = [float(scan[0].SliceThickness)] + [float(i) for i in scan[0].PixelSpacing]
    spacing = np.array(spacing)
    resize_factor = [spacing[0] / new_spacing[0], spacing[1] / new_spacing[1], spacing[2] / new_spacing[2]]
    new_real_shape = np.multiply(image.shape, resize_factor)
    new_shape = np.round(new_real_shape)
    real_resize_factor = new_shape / image.shape
    new_spacing = spacing / real_resize_factor
    image = scipy.ndimage.zoom(image, real_resize_factor)
    return image, new_spacing


if __name__ == "__main__":
    data_path = "/mnt/data_18T/data/口腔/CBCT及三維重建/dicom"
    output_path = "/mnt/data_18T/data/口腔/CBCT及三維重建/stl_path/"
    if not os.path.exists(output_path):  # create the output path
        os.mkdir(output_path)
    patient = load_scan(data_path)
    images = get_pixels_hu(patient)
    imgs_after_resamp, spacing = resample(images.astype(np.float64), patient, [1, 0.5, 1])
    # Threshold 350 HU roughly isolates bone
    v, f = make_mesh(imgs_after_resamp, 350, 1)
    # Save the STL file
    tri_mesh = trimesh.Trimesh(vertices=v, faces=f)
    tri_mesh.export(output_path + 'cube2.stl', file_type="stl")
4. Final result
Note: my machine only has integrated graphics, which is not a high-end configuration, yet with the STL-based 3D display it runs very smoothly.