MongoDB source code analysis: how the createCollection command creates a collection

MongoDB provides two ways to create a collection: implicit creation and explicit creation.

Method 1: Implicit creation (recommended)

When you insert a document into a collection that does not exist, MongoDB creates the collection automatically.

Example

Implicitly create the users collection by inserting into it:

db.users.insertOne({ name: "Alice", age: 30 })

Method 2: Explicit creation (custom configuration)

Use the createCollection() method to create a collection manually, optionally specifying configuration options (such as document size limits, indexes, and so on).

Command syntax

db.createCollection(<collection name>, { <options> })

Common options (a capped-collection example follows the list):
  • capped: whether this is a fixed-size (capped) collection (default: false).
  • size: maximum size in bytes of a capped collection.
  • max: maximum number of documents in a capped collection.
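
For example (collection names and sizes here are only illustrative), an explicitly capped collection is created like this, and omitting size while capped is true is rejected by the validation in CmdCreate::run shown below:

db.createCollection("log_events", { capped: true, size: 1048576, max: 5000 })   // 1 MB, at most 5000 documents
db.createCollection("bad_capped", { capped: true })                             // fails: the 'size' field is required when 'capped' is true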

The CmdCreate command object in mongo/db/commands/dbcommands.cpp carries out the creation of a collection:

/* create collection */
class CmdCreate : public BasicCommand {
public:
    CmdCreate() : BasicCommand("create") {}

    AllowedOnSecondary secondaryAllowed(ServiceContext*) const override {
        return AllowedOnSecondary::kNever;
    }

    virtual bool adminOnly() const {
        return false;
    }

    virtual bool supportsWriteConcern(const BSONObj& cmd) const override {
        return true;
    }

    std::string help() const override {
        return str::stream()
            << "explicitly creates a collection or view\n"
            << "{\n"
            << "  create: <string: collection or view name> [,\n"
            << "  capped: <bool: capped collection>,\n"
            << "  autoIndexId: <bool: automatic creation of _id index>,\n"
            << "  idIndex: <document: _id index specification>,\n"
            << "  size: <int: size in bytes of the capped collection>,\n"
            << "  max: <int: max number of documents in the capped collection>,\n"
            << "  storageEngine: <document: storage engine configuration>,\n"
            << "  validator: <document: validation rules>,\n"
            << "  validationLevel: <string: validation level>,\n"
            << "  validationAction: <string: validation action>,\n"
            << "  indexOptionDefaults: <document: default configuration for indexes>,\n"
            << "  viewOn: <string: name of source collection or view>,\n"
            << "  pipeline: <array<object>: aggregation pipeline stage>,\n"
            << "  collation: <document: default collation for the collection or view>,\n"
            << "  writeConcern: <document: write concern expression for the operation>]\n"
            << "}";
    }

    virtual Status checkAuthForCommand(Client* client,
                                       const std::string& dbname,
                                       const BSONObj& cmdObj) const {
        const NamespaceString nss(parseNs(dbname, cmdObj));
        return AuthorizationSession::get(client)->checkAuthForCreate(nss, cmdObj, false);
    }

    virtual bool run(OperationContext* opCtx,
                     const string& dbname,
                     const BSONObj& cmdObj,
                     BSONObjBuilder& result) {
        IDLParserErrorContext ctx("create");
        CreateCommand cmd = CreateCommand::parse(ctx, cmdObj);

        const NamespaceString ns = cmd.getNamespace();

        if (cmd.getAutoIndexId()) {
            const char* deprecationWarning =
                "the autoIndexId option is deprecated and will be removed in a future release";
            warning() << deprecationWarning;
            result.append("note", deprecationWarning);
        }

        // Ensure that the 'size' field is present if 'capped' is set to true.
        if (cmd.getCapped()) {
            uassert(ErrorCodes::InvalidOptions,
                    str::stream() << "the 'size' field is required when 'capped' is true",
                    cmd.getSize());
        }

        // If the 'size' or 'max' fields are present, then 'capped' must be set to true.
        if (cmd.getSize() || cmd.getMax()) {
            uassert(ErrorCodes::InvalidOptions,
                    str::stream() << "the 'capped' field needs to be true when either the 'size'"
                                  << " or 'max' fields are present",
                    cmd.getCapped());
        }

        // The 'temp' field is only allowed to be used internally and isn't available to clients.
        if (cmd.getTemp()) {
            uassert(ErrorCodes::InvalidOptions,
                    str::stream() << "the 'temp' field is an invalid option",
                    opCtx->getClient()->isInDirectClient() ||
                        (opCtx->getClient()->session()->getTags() |
                         transport::Session::kInternalClient));
        }

        // Validate _id index spec and fill in missing fields.
        if (cmd.getIdIndex()) {
            auto idIndexSpec = *cmd.getIdIndex();

            uassert(ErrorCodes::InvalidOptions,
                    str::stream() << "'idIndex' is not allowed with 'viewOn': " << idIndexSpec,
                    !cmd.getViewOn());

            uassert(ErrorCodes::InvalidOptions,
                    str::stream() << "'idIndex' is not allowed with 'autoIndexId': " << idIndexSpec,
                    !cmd.getAutoIndexId());

            // Perform index spec validation.
            idIndexSpec = uassertStatusOK(index_key_validate::validateIndexSpec(
                opCtx, idIndexSpec, serverGlobalParams.featureCompatibility));
            uassertStatusOK(index_key_validate::validateIdIndexSpec(idIndexSpec));

            // Validate or fill in _id index collation.
            std::unique_ptr<CollatorInterface> defaultCollator;
            if (cmd.getCollation()) {
                auto collatorStatus = CollatorFactoryInterface::get(opCtx->getServiceContext())
                                          ->makeFromBSON(*cmd.getCollation());
                uassertStatusOK(collatorStatus.getStatus());
                defaultCollator = std::move(collatorStatus.getValue());
            }

            idIndexSpec = uassertStatusOK(index_key_validate::validateIndexSpecCollation(
                opCtx, idIndexSpec, defaultCollator.get()));

            std::unique_ptr<CollatorInterface> idIndexCollator;
            if (auto collationElem = idIndexSpec["collation"]) {
                auto collatorStatus = CollatorFactoryInterface::get(opCtx->getServiceContext())
                                          ->makeFromBSON(collationElem.Obj());
                // validateIndexSpecCollation() should have checked that the _id index collation
                // spec is valid.
                invariant(collatorStatus.isOK());
                idIndexCollator = std::move(collatorStatus.getValue());
            }
            if (!CollatorInterface::collatorsMatch(defaultCollator.get(), idIndexCollator.get())) {
                uasserted(ErrorCodes::BadValue,
                          "'idIndex' must have the same collation as the collection.");
            }

            // Remove "idIndex" field from command.
            auto resolvedCmdObj = cmdObj.removeField("idIndex");

            uassertStatusOK(createCollection(opCtx, dbname, resolvedCmdObj, idIndexSpec));
            return true;
        }

        BSONObj idIndexSpec;
        uassertStatusOK(createCollection(opCtx, dbname, cmdObj, idIndexSpec));
        return true;
    }
} cmdCreate;

The core of CmdCreate is its run() method: it first parses the command with CreateCommand::parse, then validates the arguments, and finally calls createCollection to create the collection.
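
Note that db.createCollection() is only a thin shell wrapper around this create command, so a call like db.createCollection("conca", { capped: true, size: 4096 }) is equivalent to the runCommand form below; the lsid and $db fields that appear in the server log at the end of this article are added by the shell/driver, not by the user:

db.runCommand({ create: "conca", capped: true, size: 4096 })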

The four-argument createCollection() in mongo/db/catalog/create_collection.cpp:

Status createCollection(OperationContext* opCtx,
                        const std::string& dbName,
                        const BSONObj& cmdObj,
                        const BSONObj& idIndex) {
    return createCollection(opCtx,
                            CommandHelpers::parseNsCollectionRequired(dbName, cmdObj),
                            cmdObj,
                            idIndex,
                            CollectionOptions::parseForCommand);
}

The five-argument createCollection() in mongo/db/catalog/create_collection.cpp:

/**
 * Shared part of the implementation of the createCollection versions for replicated and regular
 * collection creation.
 */
Status createCollection(OperationContext* opCtx,
                        const NamespaceString& nss,
                        const BSONObj& cmdObj,
                        const BSONObj& idIndex,
                        CollectionOptions::ParseKind kind) {
    BSONObjIterator it(cmdObj);

    // Skip the first cmdObj element.
    BSONElement firstElt = it.next();
    invariant(firstElt.fieldNameStringData() == "create");

    Status status = userAllowedCreateNS(nss.db(), nss.coll());
    if (!status.isOK()) {
        return status;
    }

    // Build options object from remaining cmdObj elements.
    BSONObjBuilder optionsBuilder;
    while (it.more()) {
        const auto elem = it.next();
        if (!isGenericArgument(elem.fieldNameStringData()))
            optionsBuilder.append(elem);
        if (elem.fieldNameStringData() == "viewOn") {
            // Views don't have UUIDs so it should always be parsed for command.
            kind = CollectionOptions::parseForCommand;
        }
    }

    BSONObj options = optionsBuilder.obj();
    uassert(14832,
            "specify size:<n> when capped is true",
            !options["capped"].trueValue() || options["size"].isNumber());

    CollectionOptions collectionOptions;
    {
        StatusWith<CollectionOptions> statusWith = CollectionOptions::parse(options, kind);
        if (!statusWith.isOK()) {
            return statusWith.getStatus();
        }
        collectionOptions = statusWith.getValue();
    }

    if (collectionOptions.isView()) {
        return _createView(opCtx, nss, collectionOptions, idIndex);
    } else {
        return _createCollection(opCtx, nss, collectionOptions, idIndex);
    }
}

userAllowedCreateNS(nss.db(), nss.coll()) validates that the database name and collection name are legal and do not conflict with reserved system namespaces such as system.users, system.version, and system.roles:

Status userAllowedCreateNS(StringData db, StringData coll) {
    // validity checking
    if (db.size() == 0)
        return Status(ErrorCodes::InvalidNamespace, "db cannot be blank");

    if (!NamespaceString::validDBName(db, NamespaceString::DollarInDbNameBehavior::Allow))
        return Status(ErrorCodes::InvalidNamespace, "invalid db name");

    if (coll.size() == 0)
        return Status(ErrorCodes::InvalidNamespace, "collection cannot be blank");

    if (!NamespaceString::validCollectionName(coll))
        return Status(ErrorCodes::InvalidNamespace, "invalid collection name");

    if (!NamespaceString(db, coll).checkLengthForFCV())
        return Status(ErrorCodes::IncompatibleServerVersion,
                      str::stream() << "Cannot create collection with a long name " << db << "."
                                    << coll << " - upgrade to feature compatibility version "
                                    << FeatureCompatibilityVersionParser::kVersion44
                                    << " to be able to do so.");

    // check special areas
    if (db == "system")
        return Status(ErrorCodes::InvalidNamespace, "cannot use 'system' database");

    if (coll.startsWith("system.")) {
        if (coll == "system.js")
            return Status::OK();
        if (coll == "system.profile")
            return Status::OK();
        if (coll == "system.users")
            return Status::OK();
        if (coll == DurableViewCatalog::viewsCollectionName())
            return Status::OK();
        if (db == "admin") {
            if (coll == "system.version")
                return Status::OK();
            if (coll == "system.roles")
                return Status::OK();
            if (coll == "system.new_users")
                return Status::OK();
            if (coll == "system.backup_users")
                return Status::OK();
            if (coll == "system.keys")
                return Status::OK();
        }
        if (db == "config") {
            if (coll == "system.sessions")
                return Status::OK();
            if (coll == "system.indexBuilds")
                return Status::OK();
        }
        if (db == "local") {
            if (coll == "system.replset")
                return Status::OK();
            if (coll == "system.healthlog")
                return Status::OK();
        }
        return Status(ErrorCodes::InvalidNamespace,
                      str::stream() << "cannot write to '" << db << "." << coll << "'");
    }
    // ... (rest of the function elided in this excerpt)
}
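
For example, trying to create a non-whitelisted system collection from the shell is rejected (the exact error text and code depend on the server version):

db.createCollection("system.foo")
// fails with InvalidNamespace, e.g. { ok: 0, errmsg: "cannot write to 'db2.system.foo'", code: 73 }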

CollectionOptions::parse(options, kind) then parses the remaining fields into a CollectionOptions object. Generic command arguments never reach it: for a command such as { create: "conca", capped: true, size: 4096, writeConcern: { w: 1 } }, only { capped: true, size: 4096 } is handed to CollectionOptions::parse.

_createView handles view creation (not covered here).

_createCollection creates a regular collection:

Status _createCollection(OperationContext* opCtx,
                         const NamespaceString& nss,
                         const CollectionOptions& collectionOptions,
                         const BSONObj& idIndex) {
    return writeConflictRetry(opCtx, "create", nss.ns(), [&] {
        AutoGetOrCreateDb autoDb(opCtx, nss.db(), MODE_IX);
        Lock::CollectionLock collLock(opCtx, nss, MODE_X);

        AutoStatsTracker statsTracker(opCtx,
                                      nss,
                                      Top::LockType::NotLocked,
                                      AutoStatsTracker::LogMode::kUpdateTopAndCurop,
                                      autoDb.getDb()->getProfilingLevel());

        if (opCtx->writesAreReplicated() &&
            !repl::ReplicationCoordinator::get(opCtx)->canAcceptWritesFor(opCtx, nss)) {
            return Status(ErrorCodes::NotMaster,
                          str::stream() << "Not primary while creating collection " << nss);
        }

        WriteUnitOfWork wunit(opCtx);

        Status status = autoDb.getDb()->userCreateNS(opCtx, nss, collectionOptions, true, idIndex);
        if (!status.isOK()) {
            return status;
        }

        wunit.commit();
        return Status::OK();
    });
}

writeConflictRetry wraps the body so that write conflicts are automatically retried.

AutoGetOrCreateDb acquires the database, creating it if it does not exist yet.

Lock::CollectionLock takes an exclusive lock (MODE_X) on the target collection.

The Database object then calls userCreateNS in mongo/db/catalog/database_impl.cpp, which performs the actual creation logic:

Status DatabaseImpl::userCreateNS(OperationContext* opCtx,
                                  const NamespaceString& nss,
                                  CollectionOptions collectionOptions,
                                  bool createDefaultIndexes,
                                  const BSONObj& idIndex) const {
    // Log the collection creation.
    LOG(1) << "create collection " << nss << ' ' << collectionOptions.toBSON();

    // Validate the namespace.
    if (!NamespaceString::validCollectionComponent(nss.ns()))
        return Status(ErrorCodes::InvalidNamespace, str::stream() << "invalid ns: " << nss);

    // Check whether a collection with the same name already exists.
    Collection* collection = CollectionCatalog::get(opCtx).lookupCollectionByNamespace(nss);
    if (collection)
        return Status(ErrorCodes::NamespaceExists,
                      str::stream() << "a collection '" << nss << "' already exists");

    // Check whether a view with the same name already exists.
    if (ViewCatalog::get(this)->lookup(opCtx, nss.ns()))
        return Status(ErrorCodes::NamespaceExists,
                      str::stream() << "a view '" << nss << "' already exists");

    // Handle the collation option.
    std::unique_ptr<CollatorInterface> collator;
    if (!collectionOptions.collation.isEmpty()) {
        auto collatorWithStatus = CollatorFactoryInterface::get(opCtx->getServiceContext())
                                      ->makeFromBSON(collectionOptions.collation);
        if (!collatorWithStatus.isOK()) {
            return collatorWithStatus.getStatus();
        }
        collator = std::move(collatorWithStatus.getValue());
        collectionOptions.collation = collator ? collator->getSpec().toBSON() : BSONObj();
    }

    // Validate the document validator expression.
    if (!collectionOptions.validator.isEmpty()) {
        boost::intrusive_ptr<ExpressionContext> expCtx(
            new ExpressionContext(opCtx, collator.get()));

        const auto currentFCV = serverGlobalParams.featureCompatibility.getVersion();
        if (serverGlobalParams.validateFeaturesAsMaster.load() &&
            currentFCV != ServerGlobalParams::FeatureCompatibility::Version::kFullyUpgradedTo44) {
            expCtx->maxFeatureCompatibilityVersion = currentFCV;
        }
        expCtx->isParsingCollectionValidator = true;

        auto statusWithMatcher =
            MatchExpressionParser::parse(collectionOptions.validator, std::move(expCtx));
        if (!statusWithMatcher.isOK()) {
            return statusWithMatcher.getStatus();
        }
    }

    // Validate the collection-level storage engine options.
    Status status = validateStorageOptions(
        opCtx->getServiceContext(),
        collectionOptions.storageEngine,
        [](const auto& x, const auto& y) { return x->validateCollectionStorageOptions(y); });
    if (!status.isOK())
        return status;

    // Validate the index-level storage engine options.
    if (auto indexOptions = collectionOptions.indexOptionDefaults["storageEngine"]) {
        status = validateStorageOptions(
            opCtx->getServiceContext(), indexOptions.Obj(), [](const auto& x, const auto& y) {
                return x->validateIndexStorageOptions(y);
            });
        if (!status.isOK()) {
            return status;
        }
    }

    // Create a view or a collection depending on the options.
    if (collectionOptions.isView()) {
        uassertStatusOK(createView(opCtx, nss, collectionOptions));
    } else {
        invariant(createCollection(opCtx, nss, collectionOptions, createDefaultIndexes, idIndex),
                  str::stream() << "Collection creation failed after validating options: " << nss
                                << ". Options: " << collectionOptions.toBSON());
    }

    return Status::OK();
}

CollectionCatalog::get(opCtx).lookupCollectionByNamespace(nss) looks up the namespace to check whether a Collection object with this name already exists.
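
Seen from the shell, this check is the NamespaceExists error you get when creating the same collection twice (the error wording varies slightly by version):

db.createCollection("conca")   // { ok: 1 }
db.createCollection("conca")   // fails: NamespaceExists, "a collection 'db2.conca' already exists"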

createCollection in mongo/db/catalog/database_impl.cpp continues the creation:

Collection* DatabaseImpl::createCollection(OperationContext* opCtx,
                                           const NamespaceString& nss,
                                           const CollectionOptions& options,
                                           bool createIdIndex,
                                           const BSONObj& idIndex) const {
    // Preconditions.
    invariant(!options.isView());
    invariant(opCtx->lockState()->isDbLockedForMode(name(), MODE_IX));

    // Check whether implicit collection creation is allowed.
    uassert(CannotImplicitlyCreateCollectionInfo(nss),
            "request doesn't allow collection to be created implicitly",
            serverGlobalParams.clusterRole != ClusterRole::ShardServer ||
                OperationShardingState::get(opCtx).allowImplicitCollectionCreation() ||
                options.temp);

    // Check whether this node can accept writes.
    auto coordinator = repl::ReplicationCoordinator::get(opCtx);
    bool canAcceptWrites =
        (coordinator->getReplicationMode() != repl::ReplicationCoordinator::modeReplSet) ||
        coordinator->canAcceptWritesForDatabase(opCtx, nss.db()) || nss.isSystemDotProfile();

    // Handle the collection UUID.
    CollectionOptions optionsWithUUID = options;
    bool generatedUUID = false;
    if (!optionsWithUUID.uuid) {
        if (!canAcceptWrites) {
            uasserted(ErrorCodes::InvalidOptions,
                      "Attempted to create a new collection without a UUID");
        } else {
            optionsWithUUID.uuid.emplace(CollectionUUID::gen());
            generatedUUID = true;
        }
    }

    // Reserve an oplog slot to keep replication consistent.
    OplogSlot createOplogSlot;
    if (canAcceptWrites && supportsDocLocking() && !coordinator->isOplogDisabledFor(opCtx, nss)) {
        createOplogSlot = repl::getNextOpTime(opCtx);
    }

    // Internal fail point (used in testing).
    if (MONGO_unlikely(hangAndFailAfterCreateCollectionReservesOpTime.shouldFail())) {
        hangAndFailAfterCreateCollectionReservesOpTime.pauseWhileSet(opCtx);
        uasserted(51267, "hangAndFailAfterCreateCollectionReservesOpTime fail point enabled");
    }

    // Check whether the collection may be created.
    _checkCanCreateCollection(opCtx, nss, optionsWithUUID);
    audit::logCreateCollection(&cc(), nss.ns());

    // Log the collection creation.
    log() << "createCollection: " << nss << " with " << (generatedUUID ? "generated" : "provided")
          << " UUID: " << optionsWithUUID.uuid.get() << " and options: " << options.toBSON();

    // Create the underlying storage structures.
    auto storageEngine = opCtx->getServiceContext()->getStorageEngine();
    std::pair<RecordId, std::unique_ptr<RecordStore>> catalogIdRecordStorePair =
        uassertStatusOK(storageEngine->getCatalog()->createCollection(
            opCtx, nss, optionsWithUUID, true /*allocateDefaultSpace*/));

    // Construct the Collection object.
    auto catalogId = catalogIdRecordStorePair.first;
    std::unique_ptr<Collection> ownedCollection =
        Collection::Factory::get(opCtx)->make(opCtx,
                                              nss,
                                              catalogId,
                                              optionsWithUUID.uuid.get(),
                                              std::move(catalogIdRecordStorePair.second));
    auto collection = ownedCollection.get();
    ownedCollection->init(opCtx);

    // Set a commit callback so the collection becomes visible at the right snapshot.
    opCtx->recoveryUnit()->onCommit([collection](auto commitTime) {
        if (commitTime)
            collection->setMinimumVisibleSnapshot(commitTime.get());
    });

    // Register the collection in the CollectionCatalog.
    auto& catalog = CollectionCatalog::get(opCtx);
    auto uuid = ownedCollection->uuid();
    catalog.registerCollection(uuid, std::move(ownedCollection));
    opCtx->recoveryUnit()->onRollback([uuid, &catalog] { catalog.deregisterCollection(uuid); });

    // Create the _id index.
    BSONObj fullIdIndexSpec;
    if (createIdIndex && collection->requiresIdIndex()) {
        if (optionsWithUUID.autoIndexId == CollectionOptions::YES ||
            optionsWithUUID.autoIndexId == CollectionOptions::DEFAULT) {
            IndexCatalog* ic = collection->getIndexCatalog();
            fullIdIndexSpec = uassertStatusOK(ic->createIndexOnEmptyCollection(
                opCtx, !idIndex.isEmpty() ? idIndex : ic->getDefaultIdIndexSpec()));
        } else {
            uassert(50001,
                    "autoIndexId:false is not allowed for replicated collections",
                    !nss.isReplicated());
        }
    }

    // Internal fail point for testing.
    hangBeforeLoggingCreateCollection.pauseWhileSet();

    // Notify op observers that a collection was created.
    opCtx->getServiceContext()->getOpObserver()->onCreateCollection(
        opCtx, collection, nss, optionsWithUUID, fullIdIndexSpec, createOplogSlot);

    // Create additional indexes for system collections.
    if (canAcceptWrites && createIdIndex && nss.isSystem()) {
        createSystemIndexes(opCtx, collection);
    }

    return collection;
}

auto storageEngine = opCtx->getServiceContext()->getStorageEngine(); obtains the storage engine.

storageEngine->getCatalog()->createCollection continues the creation in the storage layer. The storage layer is the boundary to the underlying storage engine: MongoDB is designed to support multiple storage engines, and every engine has to implement this storage interface (in principle you could even plug in a pure in-memory engine). The default engine is WiredTiger.
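
To confirm which engine a running server uses, serverStatus reports it (field names may differ slightly between versions):

db.serverStatus().storageEngine
// { name: "wiredTiger", supportsCommittedReads: true, ... }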

ownedCollection->init(opCtx) initializes the Collection object.

catalog.registerCollection(uuid, std::move(ownedCollection)) registers the collection in the CollectionCatalog.

ic->createIndexOnEmptyCollection(opCtx, !idIndex.isEmpty() ? idIndex : ic->getDefaultIdIndexSpec()) creates the default _id index.
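
Immediately after creation, that default _id index is visible from the shell (output abridged; older releases also include an ns field):

db.conca.getIndexes()
// [ { v: 2, key: { _id: 1 }, name: "_id_" } ]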

The catalog layer only has to decide what needs to happen to create a collection; the storage layer decides how it happens. Next, let's look at how the storage layer talks to WiredTiger to actually create the collection.

mongo/db/storage/durable_catalog_impl.cpp

StatusWith<std::pair<RecordId, std::unique_ptr<RecordStore>>> DurableCatalogImpl::createCollection(
    OperationContext* opCtx,
    const NamespaceString& nss,
    const CollectionOptions& options,
    bool allocateDefaultSpace) {
    // Preconditions: the database must be locked in MODE_IX and the collection name non-empty.
    invariant(opCtx->lockState()->isDbLockedForMode(nss.db(), MODE_IX));
    invariant(nss.coll().size() > 0);

    // Fast check against the in-memory catalog: the collection must not already exist.
    if (CollectionCatalog::get(opCtx).lookupCollectionByNamespace(nss)) {
        return Status(ErrorCodes::NamespaceExists, "collection already exists " + nss);
    }

    // Allocate a key-value prefix (KVPrefix) used to isolate key ranges in the underlying KV store.
    KVPrefix prefix = KVPrefix::getNextPrefix(nss);

    // Persist the collection metadata into the durable catalog.
    StatusWith<Entry> swEntry = _addEntry(opCtx, nss, options, prefix);
    if (!swEntry.isOK())
        return swEntry.getStatus();
    Entry& entry = swEntry.getValue();  // Entry carries the UUID, prefix, catalogId and other metadata.

    // Ask the storage engine to create the data store (RecordStore) for the collection.
    Status status = _engine->getEngine()->createGroupedRecordStore(
        opCtx, nss.ns(), entry.ident, options, prefix);
    if (!status.isOK())
        return status;

    // Mark the collation feature as in use (storage engine feature tracking).
    if (!options.collation.isEmpty()) {
        const auto feature = DurableCatalogImpl::FeatureTracker::NonRepairableFeature::kCollation;
        if (!getFeatureTracker()->isNonRepairableFeatureInUse(opCtx, feature)) {
            getFeatureTracker()->markNonRepairableFeatureAsInUse(opCtx, feature);
        }
    }

    // Register a rollback hook: if the transaction rolls back, drop the ident that was created.
    opCtx->recoveryUnit()->onRollback(
        [opCtx, catalog = this, nss, ident = entry.ident, uuid = options.uuid.get()]() {
            catalog->_engine->getEngine()->dropIdent(opCtx, ident).ignore();  // Ignore drop failures.
        });

    // Fetch the RecordStore that was just created (the data container in the storage engine).
    auto rs = _engine->getEngine()->getGroupedRecordStore(
        opCtx, nss.ns(), entry.ident, options, prefix);
    invariant(rs);  // The store instance must not be null.

    // Return the catalog RecordId together with the RecordStore.
    return std::make_pair(entry.catalogId, std::move(rs));
}

Persisting the collection metadata: StatusWith<Entry> swEntry = _addEntry(opCtx, nss, options, prefix); records the collection's metadata in the durable catalog (this is what lets MongoDB list the collection later via show collections).
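
This durable-catalog entry is what listCollections (and therefore show collections) reads back. A quick shell check, with output abridged, looks like this:

db.getCollectionInfos({ name: "conca" })
// [ { name: "conca", type: "collection", options: {},
//     info: { readOnly: false, uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") },
//     idIndex: { v: 2, key: { _id: 1 }, name: "_id_" } } ]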

Creating the data store (RecordStore) in the storage engine: Status status = _engine->getEngine()->createGroupedRecordStore(opCtx, nss.ns(), entry.ident, options, prefix);

_addEntry(opCtx, nss, options, prefix) in mongo/db/storage/durable_catalog_impl.cpp:

StatusWith<DurableCatalog::Entry> DurableCatalogImpl::_addEntry(OperationContext* opCtx,
                                                                NamespaceString nss,
                                                                const CollectionOptions& options,
                                                                KVPrefix prefix) {
    // Precondition: the database intent-exclusive lock (MODE_IX) must be held.
    invariant(opCtx->lockState()->isDbLockedForMode(nss.db(), MODE_IX));

    // Generate a unique storage ident for the collection (a WiredTiger table name such as
    // collection-0--9135487495984222338).
    const string ident = _newUniqueIdent(nss, "collection");

    // Build the metadata BSON document.
    BSONObj obj;
    {
        BSONObjBuilder b;
        b.append("ns", nss.ns());     // Namespace, e.g. "db.coll".
        b.append("ident", ident);     // Unique identifier inside the storage engine.
        BSONCollectionCatalogEntry::MetaData md;
        md.ns = nss.ns();
        md.options = options;
        md.prefix = prefix;           // Key prefix used for key isolation in the KV store.
        b.append("md", md.toBSON());  // Serialized metadata.
        obj = b.obj();
    }

    // Write the metadata into the catalog's RecordStore (the underlying store).
    StatusWith<RecordId> res =
        _rs->insertRecord(opCtx, obj.objdata(), obj.objsize(), Timestamp());  // Timestamp is optional here.
    if (!res.isOK())
        return res.getStatus();  // Return the error if the write failed.

    // Maintain the in-memory catalogId -> entry map (thread safe: guarded by a Latch).
    stdx::lock_guard<Latch> lk(_catalogIdToEntryMapLock);
    RecordId catalogId = res.getValue();  // The RecordId is the unique ID of the metadata record.

    // The catalogId must not be in use yet.
    invariant(_catalogIdToEntryMap.find(catalogId) == _catalogIdToEntryMap.end());

    // Record the mapping: catalogId -> {catalogId, ident, nss}.
    _catalogIdToEntryMap[catalogId] = {catalogId, ident, nss};

    // Register a transaction change so the in-memory entry is removed on rollback.
    opCtx->recoveryUnit()->registerChange(std::make_unique<AddIdentChange>(this, catalogId));

    // Logging.
    LOG(1) << "stored meta data for " << nss.ns() << " @ " << catalogId;

    // Return the Entry struct describing the new collection.
    return {{catalogId, ident, nss}};
}

DurableCatalogImpl::_addEntry is the key helper of createCollection. Its main steps are:

  • Generate a unique storage ident: allocate a unique name for the collection inside the storage engine, such as the WiredTiger table name collection-0--9135487495984222338.
  • Build the metadata document: serialize the collection information (namespace, options, key prefix, etc.) into BSON, e.g. { ns: "db.conca", ident: "collection-0--8262702921578327518", md: {...} }.
  • Write it to the underlying store: _rs->insertRecord inserts the metadata document into the catalog's RecordStore, i.e. the system table whose _uri is table:_mdb_catalog.

/**
 * A thin wrapper around insertRecords() to simplify handling of single document inserts.
 */
StatusWith<RecordId> insertRecord(OperationContext* opCtx,
                                  const char* data,
                                  int len,
                                  Timestamp timestamp) {
    std::vector<Record> inOutRecords{Record{RecordId(), RecordData(data, len)}};
    Status status = insertRecords(opCtx, &inOutRecords, std::vector<Timestamp>{timestamp});
    if (!status.isOK())
        return status;
    return inOutRecords.front().id;
}

WiredTigerRecordStore::_insertRecords in mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp writes the metadata document { ns: "db2.conca", ident: "collection-0--9135487495984222338", md: {...} } into the system table _mdb_catalog, which holds the metadata for all tables and indexes.

Status WiredTigerRecordStore::insertRecords(OperationContext* opCtx,
                                            std::vector<Record>* records,
                                            const std::vector<Timestamp>& timestamps) {
    return _insertRecords(opCtx, records->data(), timestamps.data(), records->size());
}

Status WiredTigerRecordStore::_insertRecords(OperationContext* opCtx,
                                             Record* records,
                                             const Timestamp* timestamps,
                                             size_t nRecords) {
    dassert(opCtx->lockState()->isWriteLocked());

    // We are kind of cheating on capped collections since we write all of them at once ....
    // Simplest way out would be to just block vector writes for everything except oplog ?
    int64_t totalLength = 0;
    for (size_t i = 0; i < nRecords; i++)
        totalLength += records[i].data.size();

    // caller will retry one element at a time
    if (_isCapped && totalLength > _cappedMaxSize)
        return Status(ErrorCodes::BadValue, "object to insert exceeds cappedMaxSize");

    LOG(1) << "conca WiredTigerRecordStore::insertRecords _uri:" << _uri;
    LOG(1) << "conca WiredTigerRecordStore::insertRecords _tableId:" << _tableId;

    WiredTigerCursor curwrap(_uri, _tableId, true, opCtx);
    curwrap.assertInActiveTxn();
    WT_CURSOR* c = curwrap.get();
    invariant(c);

    RecordId highestId = RecordId();
    dassert(nRecords != 0);
    for (size_t i = 0; i < nRecords; i++) {
        auto& record = records[i];
        if (_isOplog) {
            StatusWith<RecordId> status =
                oploghack::extractKey(record.data.data(), record.data.size());
            if (!status.isOK())
                return status.getStatus();
            record.id = status.getValue();
        } else {
            record.id = _nextId(opCtx);
        }
        dassert(record.id > highestId);
        highestId = record.id;
    }

    for (size_t i = 0; i < nRecords; i++) {
        auto& record = records[i];
        Timestamp ts;
        if (timestamps[i].isNull() && _isOplog) {
            // If the timestamp is 0, that probably means someone inserted a document directly
            // into the oplog.  In this case, use the RecordId as the timestamp, since they are
            // one and the same. Setting this transaction to be unordered will trigger a journal
            // flush. Because these are direct writes into the oplog, the machinery to trigger a
            // journal flush is bypassed. A followup oplog read will require a fresh visibility
            // value to make progress.
            ts = Timestamp(record.id.repr());
            opCtx->recoveryUnit()->setOrderedCommit(false);
        } else {
            ts = timestamps[i];
        }
        if (!ts.isNull()) {
            LOG(4) << "inserting record with timestamp " << ts;
            fassert(39001, opCtx->recoveryUnit()->setTimestamp(ts));
        }
        setKey(c, record.id);
        WiredTigerItem value(record.data.data(), record.data.size());
        c->set_value(c, value.Get());
        int ret = WT_OP_CHECK(c->insert(c));
        if (ret)
            return wtRCToStatus(ret, "WiredTigerRecordStore::insertRecord");
    }

    _changeNumRecords(opCtx, nRecords);
    _increaseDataSize(opCtx, totalLength);

    if (_oplogStones) {
        _oplogStones->updateCurrentStoneAfterInsertOnCommit(
            opCtx, totalLength, highestId, nRecords);
    } else {
        _cappedDeleteAsNeeded(opCtx, highestId);
    }

    return Status::OK();
}

  • Maintain the in-memory map: build a "catalogId (RecordId) → collection metadata" mapping in memory to speed up later lookups.
  • Register a transaction change: make sure the metadata write participates in transaction rollback, preserving atomicity.

mongo/db/storage/wiredtiger/wiredtiger_kv_engine.h

    Status createRecordStore(OperationContext* opCtx,
                             StringData ns,
                             StringData ident,
                             const CollectionOptions& options) override {
        return createGroupedRecordStore(opCtx, ns, ident, options, KVPrefix::kNotPrefixed);
    }

mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp:


Status WiredTigerKVEngine::createGroupedRecordStore(OperationContext* opCtx,
                                                    StringData ns,
                                                    StringData ident,
                                                    const CollectionOptions& options,
                                                    KVPrefix prefix) {
    _ensureIdentPath(ident);
    WiredTigerSession session(_conn);

    const bool prefixed = prefix.isPrefixed();
    StatusWith<std::string> result = WiredTigerRecordStore::generateCreateString(
        _canonicalName, ns, options, _rsOptions, prefixed);
    if (!result.isOK()) {
        return result.getStatus();
    }
    std::string config = result.getValue();

    string uri = _uri(ident);
    WT_SESSION* s = session.getSession();
    LOG(2) << "WiredTigerKVEngine::createRecordStore ns: " << ns << " uri: " << uri
           << " config: " << config;
    return wtRCToStatus(s->create(s, uri.c_str(), config.c_str()));
}
_ensureIdentPath(ident); makes sure the physical path for the ident exists. If the ident is collection-0--9135487495984222338 and it maps to the file /data/collection-0--9135487495984222338.wt, then /data must exist.

string uri = _uri(ident);  // becomes "table:collection-0--9135487495984222338"
WT_SESSION* s = session.getSession();
s->create(s, uri.c_str(), config.c_str());

  • URI format: _uri converts the ident into a WiredTiger resource identifier such as table:collection-0--9135487495984222338 (see the shell check after this list).
  • WiredTiger API call: WT_SESSION::create performs the low-level table creation, taking the URI and a configuration string.
  • Error handling: wtRCToStatus converts the WiredTiger return code into a MongoDB Status object.
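
A rough way to see the ident from the shell is collStats, which exposes the WiredTiger URI of the table backing the collection (the exact field layout of db.collection.stats() varies across versions, so treat this as a sketch):

db.conca.stats().wiredTiger.uri
// "statistics:table:collection-0--9135487495984222338"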

> db.createCollection('conca', {}) creates the conca collection; mongod prints the log below (the lines prefixed with "conca" come from the extra LOG statements added in the code above for tracing):

2025-05-21T11:59:39.452+0800 D1 COMMAND  [conn1] conca findCommand create|
2025-05-21T11:59:39.452+0800 D1 COMMAND  [conn1] run command db2.$cmd { create: "conca", lsid: { id: UUID("e50ec2ba-3fe4-4d2f-990c-291ce2a25bdd") }, $db: "db2" }
2025-05-21T11:59:39.453+0800 D1 COMMAND  [conn1] conca runCommandImpl
2025-05-21T11:59:39.453+0800 D1 COMMAND  [conn1] conca invocation->run 1
2025-05-21T11:59:39.455+0800 D1 -        [conn1] reloading view catalog for database db2
2025-05-21T11:59:39.455+0800 D1 STORAGE  [conn1] create collection db2.conca {}
2025-05-21T11:59:39.456+0800 I  STORAGE  [conn1] createCollection: db2.conca with generated UUID: 4ce9d174-a254-442b-9d24-90fa114fa669 and options: {}
2025-05-21T11:59:39.456+0800 D1 STORAGE  [conn1] conca _addEntry ident:collection-0--9135487495984222338
2025-05-21T11:59:39.459+0800 D3 STORAGE  [conn1] WT begin_transaction for snapshot id 1678
2025-05-21T11:59:39.460+0800 D2 STORAGE  [conn1] WiredTigerSizeStorer::store Marking table:_mdb_catalog dirty, numRecords: 6, dataSize: 2801, use_count: 3
2025-05-21T11:59:39.460+0800 D1 STORAGE  [conn1] conca _addEntry res.getValue():RecordId(6)
2025-05-21T11:59:39.460+0800 D1 STORAGE  [conn1] stored meta data for db2.conca @ RecordId(6)
2025-05-21T11:59:39.461+0800 D2 STORAGE  [conn1] WiredTigerKVEngine::createRecordStore ns: db2.conca uri: table:collection-0--9135487495984222338 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1),log=(enabled=true)
2025-05-21T11:59:39.466+0800 D2 STORAGE  [conn1] WiredTigerUtil::checkApplicationMetadataFormatVersion  uri: table:collection-0--9135487495984222338 ok range 1 -> 1 current: 1
2025-05-21T11:59:39.467+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.467+0800 D3 STORAGE  [conn1]  fetched CCE metadata: { ns: "db2.conca", ident: "collection-0--9135487495984222338", md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [], prefix: -1 } }
2025-05-21T11:59:39.468+0800 D3 STORAGE  [conn1] returning metadata: md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [], prefix: -1 }
2025-05-21T11:59:39.468+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.469+0800 D3 STORAGE  [conn1]  fetched CCE metadata: { ns: "db2.conca", ident: "collection-0--9135487495984222338", md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [], prefix: -1 } }
2025-05-21T11:59:39.469+0800 D3 STORAGE  [conn1] returning metadata: md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [], prefix: -1 }
2025-05-21T11:59:39.470+0800 D1 STORAGE  [conn1] db2.conca: clearing plan cache - collection info cache reset
2025-05-21T11:59:39.470+0800 D1 STORAGE  [conn1] Registering collection db2.conca with UUID 4ce9d174-a254-442b-9d24-90fa114fa669
2025-05-21T11:59:39.471+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.471+0800 D3 STORAGE  [conn1]  fetched CCE metadata: { ns: "db2.conca", ident: "collection-0--9135487495984222338", md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [], prefix: -1 } }
2025-05-21T11:59:39.472+0800 D3 STORAGE  [conn1] returning metadata: md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [], prefix: -1 }
2025-05-21T11:59:39.473+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.475+0800 D3 STORAGE  [conn1] recording new metadata: { md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "index-1--9135487495984222338" }, ns: "db2.conca", ident: "collection-0--9135487495984222338" }
2025-05-21T11:59:39.476+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.477+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.477+0800 D3 STORAGE  [conn1]  fetched CCE metadata: { md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "index-1--9135487495984222338" }, ns: "db2.conca", ident: "collection-0--9135487495984222338" }
2025-05-21T11:59:39.478+0800 D3 STORAGE  [conn1] returning metadata: md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2025-05-21T11:59:39.479+0800 D3 STORAGE  [conn1] index create string: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=8),log=(enabled=true)
2025-05-21T11:59:39.479+0800 D2 STORAGE  [conn1] WiredTigerKVEngine::createSortedDataInterface ns: db2.conca ident: index-1--9135487495984222338 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=8),log=(enabled=true)
2025-05-21T11:59:39.480+0800 D1 STORAGE  [conn1] create uri: table:index-1--9135487495984222338 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=8),log=(enabled=true)
2025-05-21T11:59:39.484+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.484+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.484+0800 D3 STORAGE  [conn1]  fetched CCE metadata: { md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "index-1--9135487495984222338" }, ns: "db2.conca", ident: "collection-0--9135487495984222338" }
2025-05-21T11:59:39.485+0800 D3 STORAGE  [conn1] returning metadata: md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2025-05-21T11:59:39.486+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.486+0800 D3 STORAGE  [conn1]  fetched CCE metadata: { md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "index-1--9135487495984222338" }, ns: "db2.conca", ident: "collection-0--9135487495984222338" }
2025-05-21T11:59:39.487+0800 D3 STORAGE  [conn1] returning metadata: md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2025-05-21T11:59:39.490+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.491+0800 D3 STORAGE  [conn1]  fetched CCE metadata: { md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "index-1--9135487495984222338" }, ns: "db2.conca", ident: "collection-0--9135487495984222338" }
2025-05-21T11:59:39.492+0800 D3 STORAGE  [conn1] returning metadata: md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2025-05-21T11:59:39.493+0800 D2 STORAGE  [conn1] WiredTigerUtil::checkApplicationMetadataFormatVersion  uri: table:index-1--9135487495984222338 ok range 6 -> 12 current: 8
2025-05-21T11:59:39.493+0800 D1 STORAGE  [conn1] db2.conca: clearing plan cache - collection info cache reset
2025-05-21T11:59:39.494+0800 I  INDEX    [conn1] index build: done building index _id_ on ns db2.conca
2025-05-21T11:59:39.494+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.494+0800 D3 STORAGE  [conn1]  fetched CCE metadata: { md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "index-1--9135487495984222338" }, ns: "db2.conca", ident: "collection-0--9135487495984222338" }
2025-05-21T11:59:39.495+0800 D3 STORAGE  [conn1] returning metadata: md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: false, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2025-05-21T11:59:39.496+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.496+0800 D3 STORAGE  [conn1] recording new metadata: { md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "index-1--9135487495984222338" }, ns: "db2.conca", ident: "collection-0--9135487495984222338" }
2025-05-21T11:59:39.497+0800 D3 STORAGE  [conn1] looking up metadata for: RecordId(6)
2025-05-21T11:59:39.497+0800 D3 STORAGE  [conn1]  fetched CCE metadata: { md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "index-1--9135487495984222338" }, ns: "db2.conca", ident: "collection-0--9135487495984222338" }
2025-05-21T11:59:39.498+0800 D3 STORAGE  [conn1] returning metadata: md: { ns: "db2.conca", options: { uuid: UUID("4ce9d174-a254-442b-9d24-90fa114fa669") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2025-05-21T11:59:39.499+0800 D3 STORAGE  [conn1] WT commit_transaction for snapshot id 1679
2025-05-21T11:59:39.499+0800 D2 STORAGE  [conn1] CUSTOM COMMIT class mongo::WiredTigerRecordStore::NumRecordsChange
2025-05-21T11:59:39.499+0800 D2 STORAGE  [conn1] CUSTOM COMMIT class mongo::WiredTigerRecordStore::DataSizeChange
2025-05-21T11:59:39.500+0800 D2 STORAGE  [conn1] CUSTOM COMMIT class mongo::DurableCatalogImpl::AddIdentChange
