ScaleLSD notes: line segment detection, local deployment, model architecture

ant-research/scalelsd | DeepWiki

https://arxiv.org/html/2506.09369

https://gitee.com/njsgcs/scalelsd

https://github.com/ant-research/scalelsd

https://huggingface.co/cherubicxn/scalelsd

Model links (signed, time-limited CDN URLs for scalelsd-vitbase-v2-train-sa1b.pt and scalelsd-vitbase-v1-train-sa1b.pt):

https://cdn-lfs-us-1.hf.co/repos/4b/b7/4bb773266cdc6bd3898cae785776f1dd52f835c2d4fcdf132ca4c487ecac7942/d0a7333c4de518a70af7d6ea4a283aa0ce9ffe1047525373b94070b1d1d4003f?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27scalelsd-vitbase-v2-train-sa1b.pt%3B+filename%3D%22scalelsd-vitbase-v2-train-sa1b.pt%22%3B&Expires=1752922960&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjkyMjk2MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zLzRiL2I3LzRiYjc3MzI2NmNkYzZiZDM4OThjYWU3ODU3NzZmMWRkNTJmODM1YzJkNGZjZGYxMzJjYTRjNDg3ZWNhYzc5NDIvZDBhNzMzM2M0ZGU1MThhNzBhZjdkNmVhNGEyODNhYTBjZTlmZmUxMDQ3NTI1MzczYjk0MDcwYjFkMWQ0MDAzZj9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=fCSnqgEbqmYV7X3VhNo-uEz-ZO6gRvRUqtS4TzrEO2iSJ6U0LU%7EbrVwWwhc69FveoaXgoq7ikmJRWv7MZxomAF5qxPxQIr3Nhor1KyDeEp267ikFN6ASBAA8IcJ67unIWzT4d3CTrbNVh8wT0pn1JsL4gGK-Xl2A0x7%7ErVb9exw-8h7AUGDejkvg5yFbzrRIw6U4QqyuwOFpicMlZXNaPBPnjfyKuQmfdz4UzbZ4C5WeKKoUAa6sxsMTu-GEF4wv%7ETBbjQS1ev5ZQ2%7EYaZxZnICvEwcwdNxSpJPiZuJrrZq95xSOcTaTFORO-1wDT7zXJY9pdFPofzIV2CBv1i6AAQ__&Key-Pair-Id=K24J24Z295AEI9


https://cdn-lfs-us-1.hf.co/repos/4b/b7/4bb773266cdc6bd3898cae785776f1dd52f835c2d4fcdf132ca4c487ecac7942/a7eaabbfc5e79f8fc407251f923f791b39c739042eb585ca52bb85abe64c4ee7?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27scalelsd-vitbase-v1-train-sa1b.pt%3B+filename%3D%22scalelsd-vitbase-v1-train-sa1b.pt%22%3B&Expires=1752923360&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjkyMzM2MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zLzRiL2I3LzRiYjc3MzI2NmNkYzZiZDM4OThjYWU3ODU3NzZmMWRkNTJmODM1YzJkNGZjZGYxMzJjYTRjNDg3ZWNhYzc5NDIvYTdlYWFiYmZjNWU3OWY4ZmM0MDcyNTFmOTIzZjc5MWIzOWM3MzkwNDJlYjU4NWNhNTJiYjg1YWJlNjRjNGVlNz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=rtwQzCIgS2POke1hHELX2G7I-9MbRqzH51BWuHemwviaTAHyp1HaxOf3e%7EX1bGOld1k57ZR%7EHzeWh6D1PRLpaO-aG%7EgyhmCydrSIgkUzHd9VB2vGdGkoky8l9vP3HpUUkt7ERXCLInynopDd5191g3diZ0WyYy05d69Qel1ld-vhWlBOFrp8BzY%7EzLWnfITlCu0PGSlexlM86P9k0HaWor3vwqUYFJ0b0FVJ7DSrbRKIayP-S4M1fR4amT17tCXNDn3Jb-B2z4Eu1un-YpP67GeiItMuS0RtB4qkjHbJpVFKIQSvP0i08qb0HQuxiiN%7Ez%7EepWWWW1XxraAaWhbFcWA__&Key-Pair-Id=K24J24Z295AEI9

python predictor/predict.py -i Drawing_Annotation_Recognition8.v1i.yolov12/test/images/Snipaste_2025-07-12_19-54-54_png.rf.a562ea098219605eff9cb4ce58f09d0e.jpg

Seeing this interface gives me flashbacks to AI image-generation tools.

Better than Hough detection: it knows this region splits into two segments, but it also hallucinates lines out of thin air and misses some real ones.

The model has an encoder.

Running torchinfo on this model throws an error; it seems to be a dynamic model, with if-branches inside the forward pass.
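A minimal reproduction of why shape-tracing tools choke on such models, using a toy module with data-dependent branching (this is not the actual ScaleLSD code, just an illustration of the failure mode):

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    """Toy 'dynamic' model: the branch taken depends on the input values,
    which is exactly what trips up tracing-based summary tools."""
    def __init__(self):
        super().__init__()
        self.small = nn.Linear(8, 4)
        self.large = nn.Linear(8, 16)

    def forward(self, x):
        # Data-dependent control flow: cannot be resolved at trace time.
        if x.mean() > 0:
            return self.large(x)
        return self.small(x)

model = DynamicNet()

# str(model) only walks the registered submodules; it never calls forward(),
# so it works regardless of control flow (unlike torchinfo-style tracing).
structure = str(model)
print(structure)
```

This is why dumping `str(model)` to a file (as done below) succeeds even when torchinfo errors out.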

with open("model_structure.txt", "w") as f:
    f.write(str(model))
ScaleLSD((backbone): DPTFieldModel((pretrained): Module((model): VisionTransformer((patch_embed): HybridEmbed((backbone): ResNetV2((stem): Sequential((conv): StdConv2dSame(3, 64, kernel_size=(7, 7), stride=(2, 2), bias=False)(norm): GroupNormAct(32, 64, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(pool): MaxPool2dSame(kernel_size=(3, 3), stride=(2, 2), padding=(0, 0), dilation=(1, 1), ceil_mode=False))(stages): Sequential((0): ResNetStage((blocks): Sequential((0): Bottleneck((downsample): DownsampleConv((conv): StdConv2dSame(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): Identity()))(conv1): StdConv2dSame(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 64, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 64, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(1): Bottleneck((conv1): StdConv2dSame(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 64, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 64, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(2): Bottleneck((conv1): StdConv2dSame(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 64, 
eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 64, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))))(1): ResNetStage((blocks): Sequential((0): Bottleneck((downsample): DownsampleConv((conv): StdConv2dSame(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)(norm): GroupNormAct(32, 512, eps=1e-05, affine=True(drop): Identity()(act): Identity()))(conv1): StdConv2dSame(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 128, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(128, 128, kernel_size=(3, 3), stride=(2, 2), bias=False)(norm2): GroupNormAct(32, 128, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 512, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(1): Bottleneck((conv1): StdConv2dSame(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 128, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 128, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 512, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(2): Bottleneck((conv1): StdConv2dSame(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): 
GroupNormAct(32, 128, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 128, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 512, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(3): Bottleneck((conv1): StdConv2dSame(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 128, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 128, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 512, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))))(2): ResNetStage((blocks): Sequential((0): Bottleneck((downsample): DownsampleConv((conv): StdConv2dSame(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)(norm): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity()))(conv1): StdConv2dSame(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(256, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)(norm2): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(1): Bottleneck((conv1): StdConv2dSame(1024, 256, kernel_size=(1, 1), 
stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(2): Bottleneck((conv1): StdConv2dSame(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(3): Bottleneck((conv1): StdConv2dSame(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(4): Bottleneck((conv1): StdConv2dSame(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(256, 256, kernel_size=(3, 3), 
stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(5): Bottleneck((conv1): StdConv2dSame(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(6): Bottleneck((conv1): StdConv2dSame(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(7): Bottleneck((conv1): StdConv2dSame(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(256, 1024, 
kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True))(8): Bottleneck((conv1): StdConv2dSame(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm1): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv2): StdConv2dSame(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(norm2): GroupNormAct(32, 256, eps=1e-05, affine=True(drop): Identity()(act): ReLU(inplace=True))(conv3): StdConv2dSame(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm3): GroupNormAct(32, 1024, eps=1e-05, affine=True(drop): Identity()(act): Identity())(drop_path): Identity()(act3): ReLU(inplace=True)))))(norm): Identity()(head): ClassifierHead((global_pool): SelectAdaptivePool2d(pool_type=, flatten=Identity())(drop): Dropout(p=0.0, inplace=False)(fc): Identity()(flatten): Identity()))(proj): Conv2d(1024, 768, kernel_size=(1, 1), stride=(1, 1)))(pos_drop): Dropout(p=0.0, inplace=False)(patch_drop): Identity()(norm_pre): Identity()(blocks): Sequential((0): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(1): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): 
Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(2): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(3): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): 
Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(4): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(5): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(6): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, 
out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(7): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(8): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(9): 
Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(10): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity())(11): Block((norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(attn): Attention((qkv): Linear(in_features=768, out_features=2304, bias=True)(q_norm): Identity()(k_norm): Identity()(attn_drop): Dropout(p=0.0, inplace=False)(norm): Identity()(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(ls1): Identity()(drop_path1): Identity()(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(mlp): Mlp((fc1): 
Linear(in_features=768, out_features=3072, bias=True)(act): GELU(approximate='none')(drop1): Dropout(p=0.0, inplace=False)(norm): Identity()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop2): Dropout(p=0.0, inplace=False))(ls2): Identity()(drop_path2): Identity()))(norm): LayerNorm((768,), eps=1e-06, elementwise_affine=True)(fc_norm): Identity()(head_drop): Dropout(p=0.0, inplace=False)(head): Linear(in_features=768, out_features=1000, bias=True))(act_postprocess1): Sequential((0): Identity()(1): Identity()(2): Identity())(act_postprocess2): Sequential((0): Identity()(1): Identity()(2): Identity())(act_postprocess3): Sequential((0): ProjectReadout((project): Sequential((0): Linear(in_features=1536, out_features=768, bias=True)(1): GELU(approximate='none')))(1): Transpose()(2): Unflatten(dim=2, unflattened_size=torch.Size([24, 24]))(3): Conv2d(768, 768, kernel_size=(1, 1), stride=(1, 1)))(act_postprocess4): Sequential((0): ProjectReadout((project): Sequential((0): Linear(in_features=1536, out_features=768, bias=True)(1): GELU(approximate='none')))(1): Transpose()(2): Unflatten(dim=2, unflattened_size=torch.Size([24, 24]))(3): Conv2d(768, 768, kernel_size=(1, 1), stride=(1, 1))(4): Conv2d(768, 768, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))))(scratch): Module((layer1_rn): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(layer2_rn): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(layer3_rn): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(layer4_rn): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(refinenet1): FeatureFusionBlock_custom((out_conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))(resConfUnit1): ResidualConvUnit_custom((conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(bn1): 
BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(activation): ReLU()(skip_add): FloatFunctional((activation_post_process): Identity()))(resConfUnit2): ResidualConvUnit_custom((conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(activation): ReLU()(skip_add): FloatFunctional((activation_post_process): Identity()))(skip_add): FloatFunctional((activation_post_process): Identity()))(refinenet2): FeatureFusionBlock_custom((out_conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))(resConfUnit1): ResidualConvUnit_custom((conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(activation): ReLU()(skip_add): FloatFunctional((activation_post_process): Identity()))(resConfUnit2): ResidualConvUnit_custom((conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(activation): ReLU()(skip_add): FloatFunctional((activation_post_process): Identity()))(skip_add): FloatFunctional((activation_post_process): Identity()))(refinenet3): FeatureFusionBlock_custom((out_conv): Conv2d(256, 256, 
kernel_size=(1, 1), stride=(1, 1))(resConfUnit1): ResidualConvUnit_custom((conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(activation): ReLU()(skip_add): FloatFunctional((activation_post_process): Identity()))(resConfUnit2): ResidualConvUnit_custom((conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(activation): ReLU()(skip_add): FloatFunctional((activation_post_process): Identity()))(skip_add): FloatFunctional((activation_post_process): Identity()))(refinenet4): FeatureFusionBlock_custom((out_conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))(resConfUnit1): ResidualConvUnit_custom((conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(activation): ReLU()(skip_add): FloatFunctional((activation_post_process): Identity()))(resConfUnit2): ResidualConvUnit_custom((conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True)(activation): ReLU()(skip_add): FloatFunctional((activation_post_process): Identity()))(skip_add): FloatFunctional((activation_post_process): Identity()))(output_conv): Sequential((0): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(1): ReLU(inplace=True)(2): MultitaskHead((heads): ModuleList((0): Sequential((0): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(1): ReLU(inplace=True)(2): Conv2d(32, 3, kernel_size=(1, 1), stride=(1, 1)))(1-2): 2 x Sequential((0): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(1): ReLU(inplace=True)(2): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1)))(3-4): 2 x Sequential((0): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(1): ReLU(inplace=True)(2): Conv2d(32, 2, kernel_size=(1, 1), stride=(1, 1))))))))(loss): CrossEntropyLoss()(bce_loss): BCEWithLogitsLoss()
)
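A dump like the one above is hard to eyeball. A small generic helper (not part of the ScaleLSD repo) can summarize any `nn.Module` by per-child parameter counts; shown here on a toy model, but with the loaded ScaleLSD checkpoint you would pass `model` instead:

```python
import torch.nn as nn

def param_counts(model: nn.Module) -> dict:
    """Map each top-level child module name to its total parameter count."""
    return {
        name: sum(p.numel() for p in child.parameters())
        for name, child in model.named_children()
    }

# Demo on a toy module (children of a Sequential are named "0", "1", ...).
toy = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),  # 16*3*3*3 + 16 = 448 params
    nn.ReLU(),                        # 0 params
    nn.Linear(16, 10),                # 16*10 + 10 = 170 params
)
counts = param_counts(toy)
print(counts)
```

On the ScaleLSD model this splits the total between `backbone` and the loss modules, which is more digestible than the full repr.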
