本节介绍了PortalXXX函数,这些函数在exec_simple_query中被调用,包括CreatePortal、PortalDefineQuery、PortalSetResultFormat、PortalRun和PortalDrop函数。
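这些函数在exec_simple_query中的调用顺序大致如下(简化示意,省略了解析、重写和生成执行计划等步骤,非完整源码,细节以postgres.c为准):
/* exec_simple_query中Portal相关调用的简化示意(非完整源码) */
portal = CreatePortal("", true, true);          /* 简单查询使用未命名portal */
portal->visible = false;                        /* 不在pg_cursors中显示 */
PortalDefineQuery(portal, NULL, query_string,
                  commandTag, plantree_list, NULL);
PortalStart(portal, NULL, 0, InvalidSnapshot);  /* 选择执行策略,必要时启动执行器 */
PortalSetResultFormat(portal, 1, &format);      /* 选择输出格式码 */
(void) PortalRun(portal, FETCH_ALL, true, true,
                 receiver, receiver, completionTag);
PortalDrop(portal, false);                      /* 执行完毕,销毁portal */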
Portal
包括PortalStrategy枚举(执行策略/场景)定义、PortalStatus状态定义和PortalData结构体。Portal是指向PortalData结构体的指针,详见代码注释。
/*
* We have several execution strategies for Portals, depending on what
* query or queries are to be executed. (Note: in all cases, a Portal
* executes just a single source-SQL query, and thus produces just a
* single result from the user's viewpoint. However, the rule rewriter
* may expand the single source query to zero or many actual queries.)
* 对于Portal,有几种执行策略,具体取决于要执行的是什么查询。
* (注意:无论什么情况下,一个Portal只执行一个source-SQL查询,因此从用户的角度来看只产生一个结果。
* 但是,规则重写器可能将这个源查询扩展为零个或多个实际查询。)
*
* PORTAL_ONE_SELECT: the portal contains one single SELECT query. We run
* the Executor incrementally as results are demanded. This strategy also
* supports holdable cursors (the Executor results can be dumped into a
* tuplestore for access after transaction completion).
* PORTAL_ONE_SELECT: 包含一个SELECT查询。
* 在需要结果时,增量地运行执行器。
* 该策略还支持可持有游标(执行器结果可以在事务完成后转储到tuplestore中进行访问)。
*
* PORTAL_ONE_RETURNING: the portal contains a single INSERT/UPDATE/DELETE
* query with a RETURNING clause (plus possibly auxiliary queries added by
* rule rewriting). On first execution, we run the portal to completion
* and dump the primary query's results into the portal tuplestore; the
* results are then returned to the client as demanded. (We can't support
* suspension of the query partway through, because the AFTER TRIGGER code
* can't cope, and also because we don't want to risk failing to execute
* all the auxiliary queries.)
* PORTAL_ONE_RETURNING: 包含一个带有RETURNING子句的INSERT/UPDATE/DELETE查询
* (可能还包括由规则重写添加的辅助查询)。
* 在第一次执行时,将Portal运行至完成,并将主查询的结果转储到Portal的tuplestore中;
* 然后根据需要将结果返回给客户端。
* (我们不支持查询中途挂起,因为AFTER触发器代码无法处理,
* 也因为不想冒未能执行完所有辅助查询的风险。)
*
* PORTAL_ONE_MOD_WITH: the portal contains one single SELECT query, but
* it has data-modifying CTEs. This is currently treated the same as the
* PORTAL_ONE_RETURNING case because of the possibility of needing to fire
* triggers. It may act more like PORTAL_ONE_SELECT in future.
* PORTAL_ONE_MOD_WITH: 只包含一个SELECT查询,但它具有数据修改的CTEs。
* 目前对这种情况的处理与PORTAL_ONE_RETURNING相同,因为可能需要触发触发器。将来它的行为可能更像PORTAL_ONE_SELECT。
*
* PORTAL_UTIL_SELECT: the portal contains a utility statement that returns
* a SELECT-like result (for example, EXPLAIN or SHOW). On first execution,
* we run the statement and dump its results into the portal tuplestore;
* the results are then returned to the client as demanded.
* PORTAL_UTIL_SELECT: 包含一个实用程序语句,该语句返回一个类似SELECT的结果(例如,EXPLAIN或SHOW)。
* 在第一次执行时,运行语句并将其结果转储到portal tuplestore;然后根据需要将结果返回给客户端。
*
* PORTAL_MULTI_QUERY: all other cases. Here, we do not support partial
* execution: the portal's queries will be run to completion on first call.
* PORTAL_MULTI_QUERY: 除上述情况外的其他情况。
* 在这里,不支持部分执行:Portal的查询语句将在第一次调用时运行到完成。
*/
typedef enum PortalStrategy
{
PORTAL_ONE_SELECT,
PORTAL_ONE_RETURNING,
PORTAL_ONE_MOD_WITH,
PORTAL_UTIL_SELECT,
PORTAL_MULTI_QUERY
} PortalStrategy;
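实际的策略选择由pquery.c中的ChoosePortalStrategy完成,在PortalStart中被调用。下面是一个高度简化的示意(省略了Query链表、canSetTag等诸多分支和校验,仅用于说明各策略的典型对应关系,非真实实现):
/* 策略选择的简化示意(非ChoosePortalStrategy的完整逻辑) */
static PortalStrategy
choose_strategy_sketch(List *stmts)
{
    if (list_length(stmts) == 1)
    {
        PlannedStmt *pstmt = linitial_node(PlannedStmt, stmts);

        if (pstmt->commandType == CMD_SELECT && !pstmt->hasModifyingCTE)
            return PORTAL_ONE_SELECT;          /* 单个纯SELECT */
        if (pstmt->commandType == CMD_SELECT && pstmt->hasModifyingCTE)
            return PORTAL_ONE_MOD_WITH;        /* 带数据修改CTE的SELECT */
        if (pstmt->hasReturning)
            return PORTAL_ONE_RETURNING;       /* INSERT/UPDATE/DELETE ... RETURNING */
        if (pstmt->commandType == CMD_UTILITY &&
            UtilityReturnsTuples(pstmt->utilityStmt))
            return PORTAL_UTIL_SELECT;         /* EXPLAIN/SHOW等返回元组的实用语句 */
    }
    return PORTAL_MULTI_QUERY;                 /* 其他情况 */
}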
/*
* A portal is always in one of these states. It is possible to transit
* from ACTIVE back to READY if the query is not run to completion;
* otherwise we never back up in status.
* Portal总是处于这些状态中的之一。
* 如果查询没有运行至完成,则可以从ACTIVE状态转回READY状态;否则状态永远不会回退。
*/
typedef enum PortalStatus
{
PORTAL_NEW, /* 刚创建;freshly created */
PORTAL_DEFINED, /* PortalDefineQuery完成;PortalDefineQuery done */
PORTAL_READY, /* PortalStart完成;PortalStart complete, can run it */
PORTAL_ACTIVE, /* Portal正在运行;portal is running (can't delete it) */
PORTAL_DONE, /* Portal已经完成;portal is finished (don't re-run it) */
PORTAL_FAILED /* Portal出现错误;portal got error (can't re-run it) */
} PortalStatus;
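状态的典型迁移路径为: PORTAL_NEW -> PORTAL_DEFINED -> PORTAL_READY -> PORTAL_ACTIVE -> PORTAL_READY/PORTAL_DONE/PORTAL_FAILED。其中READY到ACTIVE的迁移由portalmem.c中的MarkPortalActive完成,其实现大致如下(示意,细节以源码为准):
/*
 * MarkPortalActive的示意实现:只有处于PORTAL_READY状态的portal才能被运行,
 * 否则报错;迁移到ACTIVE的同时记录当前子事务。
 */
void
MarkPortalActive(Portal portal)
{
    /* 为安全起见,这里使用运行时检查而不仅仅是Assert */
    if (portal->status != PORTAL_READY)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("portal \"%s\" cannot be run", portal->name)));
    /* 执行状态迁移 */
    portal->status = PORTAL_ACTIVE;
    portal->activeSubid = GetCurrentSubTransactionId();
}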
typedef struct PortalData *Portal;//结构体指针
typedef struct PortalData
{
/* Bookkeeping data */
const char *name; /* portal的名称;portal's name */
const char *prepStmtName; /* 已完成准备的源语句;source prepared statement (NULL if none) */
MemoryContext portalContext; /* 内存上下文;subsidiary memory for portal */
ResourceOwner resowner; /* 资源的owner;resources owned by portal */
void (*cleanup) (Portal portal); /* cleanup钩子函数;cleanup hook */
/*
* State data for remembering which subtransaction(s) the portal was
* created or used in. If the portal is held over from a previous
* transaction, both subxids are InvalidSubTransactionId. Otherwise,
* createSubid is the creating subxact and activeSubid is the last subxact
* in which we ran the portal.
* 状态数据,用于记住在哪个子事务中创建或使用Portal。
* 如果Portal是从以前的事务中持有的,那么两个subxids都应该是InvalidSubTransactionId。
* 否则,createSubid是创建该Portal的subxact,而activeSubid是最近一次运行该Portal的subxact。
*/
SubTransactionId createSubid; /* 创建该Portal的subxact;the creating subxact */
SubTransactionId activeSubid; /* 活动的最后一个subxact;the last subxact with activity */
/* The query or queries the portal will execute */
//portal将会执行的查询
const char *sourceText; /* 查询的源文本;text of query (as of 8.4, never NULL) */
const char *commandTag; /* 源查询的命令tag;command tag for original query */
List *stmts; /* PlannedStmt链表;list of PlannedStmts */
CachedPlan *cplan; /* 缓存的PlannedStmts;CachedPlan, if stmts are from one */
ParamListInfo portalParams; /* 传递给查询的参数;params to pass to query */
QueryEnvironment *queryEnv; /* 查询的执行环境;environment for query */
/* Features/options */
PortalStrategy strategy; /* 场景;see above */
int cursorOptions; /* DECLARE CURSOR选项位;DECLARE CURSOR option bits */
bool run_once; /* 是否只执行一次;portal will only be run once */
/* Status data */
PortalStatus status; /* Portal的状态;see above */
bool portalPinned; /* 是否不能被清除;a pinned portal can't be dropped */
bool autoHeld; /* 是否自动从pinned到held;was automatically converted from pinned to
* held (see HoldPinnedPortals()) */
/* If not NULL, Executor is active; call ExecutorEnd eventually: */
//如不为NULL,执行器处于活动状态
QueryDesc *queryDesc; /* 执行器需要使用的信息;info needed for executor invocation */
/* If portal returns tuples, this is their tupdesc: */
//如Portal需要返回元组,这是元组的描述
TupleDesc tupDesc; /* 结果元组的描述;descriptor for result tuples */
/* and these are the format codes to use for the columns: */
//列信息的格式码
int16 *formats; /* 每一列的格式码;a format code for each column */
/*
* Where we store tuples for a held cursor or a PORTAL_ONE_RETURNING or
* PORTAL_UTIL_SELECT query. (A cursor held past the end of its
* transaction no longer has any active executor state.)
* 在这里,为持有的游标或PORTAL_ONE_RETURNING或PORTAL_UTIL_SELECT存储元组。
* (在事务结束后持有的游标不再具有任何活动执行器状态。)
*/
Tuplestorestate *holdStore; /* 存储持有的游标信息;store for holdable cursors */
MemoryContext holdContext; /* 持有holdStore的内存上下文;memory containing holdStore */
/*
* Snapshot under which tuples in the holdStore were read. We must keep a
* reference to this snapshot if there is any possibility that the tuples
* contain TOAST references, because releasing the snapshot could allow
* recently-dead rows to be vacuumed away, along with any toast data
* belonging to them. In the case of a held cursor, we avoid needing to
* keep such a snapshot by forcibly detoasting the data.
* 读取holdStore中元组的Snapshot。
* 如果元组包含TOAST引用的可能性存在,那么必须保持对该快照的引用,
* 因为释放快照可能会使最近废弃的行与属于它们的TOAST数据一起被清除。
* 对于持有的游标,通过强制解压数据来避免需要保留这样的快照。
*/
Snapshot holdSnapshot; /* 已注册的快照信息,如无则为NULL;registered snapshot, or NULL if none */
/*
* atStart, atEnd and portalPos indicate the current cursor position.
* portalPos is zero before the first row, N after fetching N'th row of
* query. After we run off the end, portalPos = # of rows in query, and
* atEnd is true. Note that atStart implies portalPos == 0, but not the
* reverse: we might have backed up only as far as the first row, not to
* the start. Also note that various code inspects atStart and atEnd, but
* only the portal movement routines should touch portalPos.
* atStart、atEnd和portalPos表示当前光标的位置。
* portalPos在第一行之前为0,在获取查询的第N行之后为N。
* 越过结果集末尾之后,portalPos等于查询返回的总行数,atEnd为true。
* 注意,atStart蕴含portalPos == 0,但反之不成立:我们可能只回退到第一行,而不是回退到起始位置。
* 还要注意,多处代码会检查atStart和atEnd,但只有Portal移动例程才应该修改portalPos。
*/
bool atStart;//处于开始位置?
bool atEnd;//处于结束位置?
uint64 portalPos;//实际行号
/* Presentation data, primarily used by the pg_cursors system view */
//用于表示的数据,主要由pg_cursors系统视图使用
TimestampTz creation_time; /* portal定义的时间;time at which this portal was defined */
bool visible; /* 是否在pg_cursors中可见? include this portal in pg_cursors? */
} PortalData;
/*
* PortalIsValid
* True iff portal is valid.
* 判断Portal是否有效
*/
#define PortalIsValid(p) PointerIsValid(p)
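PortalIsValid通常与GetPortalByName配合使用,用于判断指定名称的portal是否存在,调用模式大致如下(示意片段,"my_cursor"为假设的游标名):
/* 示意:按名称查找portal并检查其是否存在 */
Portal p = GetPortalByName("my_cursor");

if (!PortalIsValid(p))
    ereport(ERROR,
            (errcode(ERRCODE_UNDEFINED_CURSOR),
             errmsg("cursor \"%s\" does not exist", "my_cursor")));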
CreatePortal
CreatePortal函数创建给定名称的Portal结构.
//------------------------------------------------------ CreatePortal
/*
* CreatePortal
* Returns a new portal given a name.
* 创建给定名称的Portal结构
*
* allowDup: if true, automatically drop any pre-existing portal of the
* same name (if false, an error is raised).
* allowDup:如为true,则自动清除已存在的同名portal,如为F,则报错
*
* dupSilent: if true, don't even emit a WARNING.
* dupSilent:如为T,不提示警告
*/
Portal
CreatePortal(const char *name, bool allowDup, bool dupSilent)
{
Portal portal;
AssertArg(PointerIsValid(name));
//根据给定的名称获取portal
portal = GetPortalByName(name);
if (PortalIsValid(portal))
{
//如portal有效
if (!allowDup)//不允许同名
ereport(ERROR,
(errcode(ERRCODE_DUPLICATE_CURSOR),
errmsg("cursor \"%s\" already exists", name)));
if (!dupSilent)//是否静默警告信息
ereport(WARNING,
(errcode(ERRCODE_DUPLICATE_CURSOR),
errmsg("closing existing cursor \"%s\"",
name)));
PortalDrop(portal, false);
}
/* make new portal structure */
//创建新的portal结构
portal = (Portal) MemoryContextAllocZero(TopPortalContext, sizeof *portal);
/* initialize portal context; typically it won't store much */
//初始化portal内存上下文;通常它不会存储太多内容
portal->portalContext = AllocSetContextCreate(TopPortalContext,
"PortalContext",
ALLOCSET_SMALL_SIZES);
/* create a resource owner for the portal */
//创建resource owner
portal->resowner = ResourceOwnerCreate(CurTransactionResourceOwner,
"Portal");
/* initialize portal fields that don't start off zero */
//初始化portal中的域
portal->status = PORTAL_NEW;//状态
portal->cleanup = PortalCleanup;//默认的cleanup函数
portal->createSubid = GetCurrentSubTransactionId();//正在创建的subxact
portal->activeSubid = portal->createSubid;//与createSubid一致
portal->strategy = PORTAL_MULTI_QUERY;//场景为PORTAL_MULTI_QUERY
portal->cursorOptions = CURSOR_OPT_NO_SCROLL;//默认为不允许滚动的游标
portal->atStart = true;//处于开始
portal->atEnd = true; /* 默认不允许获取数据;disallow fetches until query is set */
portal->visible = true;//在pg_cursors中可见
portal->creation_time = GetCurrentStatementStartTimestamp();//创建时间
/* put portal in table (sets portal->name) */
PortalHashTableInsert(portal, name);//放在HashTable中
/* reuse portal->name copy */
MemoryContextSetIdentifier(portal->portalContext, portal->name);//设置内存上下文标识
return portal;//返回portal结构体
}
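简单查询协议使用未命名portal并允许覆盖同名portal,而DECLARE CURSOR(portalcmds.c中的PerformCursorOpen)则使用游标名创建portal且不允许重名,两种典型调用方式大致如下(示意):
/* 示意:两种典型的CreatePortal调用方式 */
/* exec_simple_query: 未命名portal,允许覆盖同名portal,且不提示警告 */
portal = CreatePortal("", true, true);

/* PerformCursorOpen(DECLARE CURSOR): 以游标名命名,重名则报错 */
portal = CreatePortal(cstmt->portalname, false, false);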
PortalDefineQuery
PortalDefineQuery是构建portal's query信息的一个简单过程.
//------------------------------------------------------ PortalDefineQuery
/*
* PortalDefineQuery
* A simple subroutine to establish a portal's query.
* 构建portal's query的一个简单过程.
*
* Notes: as of PG 8.4, caller MUST supply a sourceText string; it is not
* allowed anymore to pass NULL. (If you really don't have source text,
* you can pass a constant string, perhaps "(query not available)".)
* 注意:从PG 8.4开始,调用者必须提供sourceText字符串,不再允许传入NULL。
* 如果没有源文本,可以传递常量字符串,比如"(query not available)"
*
* commandTag shall be NULL if and only if the original query string
* (before rewriting) was an empty string. Also, the passed commandTag must
* be a pointer to a constant string, since it is not copied.
* commandTag为NULL,当且仅当原始查询字符串(重写之前)是空字符串。
* 另外,传递的commandTag必须是一个指向常量字符串的指针,因为它不会被复制。
*
* If cplan is provided, then it is a cached plan containing the stmts, and
* the caller must have done GetCachedPlan(), causing a refcount increment.
* The refcount will be released when the portal is destroyed.
* 如果cplan不为NULL,那么它就是一个包含stmts的缓存计划,调用者必须执行GetCachedPlan(),这会导致refcount的增加。
* 当Portal被销毁时,refcount将被释放。
*
* If cplan is NULL, then it is the caller's responsibility to ensure that
* the passed plan trees have adequate lifetime. Typically this is done by
* copying them into the portal's context.
* 如果cplan为空,那么调用方有责任确保传递的计划树具有足够长的生命周期。
* 通常,这是通过将它们复制到Portal的上下文中来完成的。
*
* The caller is also responsible for ensuring that the passed prepStmtName
* (if not NULL) and sourceText have adequate lifetime.
* 调用方同样有责任确保传递的参数prepStmtName(如为NOT NULL)和sourceText有足够长的生命期
*
* NB: this function mustn't do much beyond storing the passed values; in
* particular don't do anything that risks elog(ERROR). If that were to
* happen here before storing the cplan reference, we'd leak the plancache
* refcount that the caller is trying to hand off to us.
* 注意:这个函数除了存储传入的值之外不应做其他事情,特别是不要做任何有elog(ERROR)风险的事情。
* 如果在存储cplan引用之前发生这种情况,会泄漏调用者试图传递给我们的plancache refcount。
*/
void
PortalDefineQuery(Portal portal,
const char *prepStmtName,
const char *sourceText,
const char *commandTag,
List *stmts,
CachedPlan *cplan)
{
AssertArg(PortalIsValid(portal));
AssertState(portal->status == PORTAL_NEW);
AssertArg(sourceText != NULL);
AssertArg(commandTag != NULL || stmts == NIL);
//仅用于传递参数,给portal结构赋值
portal->prepStmtName = prepStmtName;
portal->sourceText = sourceText;
portal->commandTag = commandTag;
portal->stmts = stmts;
portal->cplan = cplan;
portal->status = PORTAL_DEFINED;//设置状态为PORTAL_DEFINED
}
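对应后面gdb跟踪中看到的参数,exec_simple_query对PortalDefineQuery的典型调用方式如下(示意,与源码略有简化):
/* 示意:exec_simple_query中定义未命名portal的查询 */
PortalDefineQuery(portal,
                  NULL,            /* 没有已准备好的语句(prepStmtName) */
                  query_string,    /* 查询源文本,不允许为NULL */
                  commandTag,      /* 如"SELECT" */
                  plantree_list,   /* PlannedStmt链表 */
                  NULL);           /* 非缓存计划,cplan为NULL */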
PortalSetResultFormat
PortalSetResultFormat函数为portal的输出选择格式化码.
//------------------------------------------------------ PortalSetResultFormat
/*
* PortalSetResultFormat
* Select the format codes for a portal's output.
* 为portal的输出选择格式化码.
*
* This must be run after PortalStart for a portal that will be read by
* a DestRemote or DestRemoteExecute destination. It is not presently needed
* for other destination types.
* 对于将由DestRemote或DestRemoteExecute目标读取的portal,本函数必须在PortalStart之后调用。
* 其他目标类型目前不需要它。
*
* formats[] is the client format request, as per Bind message conventions.
* formats[]是客户端的格式化请求,按照绑定的消息约定.
*/
void
PortalSetResultFormat(Portal portal, int nFormats, int16 *formats)
{
int natts;
int i;
/* Do nothing if portal won't return tuples */
//如portal不返回元组,则直接返回
if (portal->tupDesc == NULL)
return;
natts = portal->tupDesc->natts;
portal->formats = (int16 *)
MemoryContextAlloc(portal->portalContext,
natts * sizeof(int16));
if (nFormats > 1)
{
/* format specified for each column */
//为每一列分别指定了格式码
if (nFormats != natts)
ereport(ERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
errmsg("bind message has %d result formats but query has %d columns",
nFormats, natts)));
memcpy(portal->formats, formats, natts * sizeof(int16));
}
else if (nFormats > 0)
{
/* single format specified, use for all columns */
//指定格式,用于所有列
int16 format1 = formats[0];
for (i = 0; i < natts; i++)
portal->formats[i] = format1;
}
else
{
/* use default format for all columns */
//所有列使用默认的格式
for (i = 0; i < natts; i++)
portal->formats[i] = 0;
}
}
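在简单查询协议中,exec_simple_query只传入一个格式码:缺省为0(文本格式),仅当FETCH的目标是BINARY游标时才使用1(二进制格式),大致如下(示意,改编自postgres.c):
/* 示意:exec_simple_query中确定输出格式码并调用PortalSetResultFormat */
int16   format = 0;                 /* 缺省为文本格式 */

if (IsA(parsetree->stmt, FetchStmt))
{
    FetchStmt  *stmt = (FetchStmt *) parsetree->stmt;

    if (!stmt->ismove)
    {
        Portal  fportal = GetPortalByName(stmt->portalname);

        if (PortalIsValid(fportal) &&
            (fportal->cursorOptions & CURSOR_OPT_BINARY))
            format = 1;             /* BINARY游标使用二进制格式 */
    }
}
PortalSetResultFormat(portal, 1, &format);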
PortalRun
PortalRun执行portal单个查询或多个查询
//------------------------------------------------------ PortalRun
/*
* PortalRun
* Run a portal's query or queries.
* 执行portal查询或多个查询
*
* count <= 0 is interpreted as a no-op: the destination gets started up
* and shut down, but nothing else happens. Also, count == FETCH_ALL is
* interpreted as "all rows". Note that count is ignored in multi-query
* situations, where we always run the portal to completion.
* count <= 0被解释为no-op:目标启动并关闭,但是没有发生其他事情。
* count == FETCH_ALL被解释为“所有行”。
* 注意,在多个查询的情况下,count被忽略,在这种情况下,我们总是运行portal直到完成。
*
* isTopLevel: true if query is being executed at backend "top level"
* (that is, directly from a client command message)
* isTopLevel: 如查询在后端"顶层"执行(即直接来自客户端命令消息),则为T
*
* dest: where to send output of primary (canSetTag) query
* dest: 主查询(canSetTag)将输出到哪里
*
* altdest: where to send output of non-primary queries
* altdest:非主查询(non-primary)将输出到哪里
*
* completionTag: points to a buffer of size COMPLETION_TAG_BUFSIZE
* in which to store a command completion status string.
* May be NULL if caller doesn't want a status string.
* completionTag:指向一个大小为COMPLETION_TAG_BUFSIZE的缓冲区,其中存储一个命令完成状态字符串。
* 如果调用者不想要状态字符串,则可能为空。
*
* Returns true if the portal's execution is complete, false if it was
* suspended due to exhaustion of the count parameter.
* 如portal执行完成,返回T,否则如果由于计数参数耗尽而暂停,则为false
*/
bool
PortalRun(Portal portal, long count, bool isTopLevel, bool run_once,
DestReceiver *dest, DestReceiver *altdest,
char *completionTag)
{
bool result;
uint64 nprocessed;
ResourceOwner saveTopTransactionResourceOwner;
MemoryContext saveTopTransactionContext;
Portal saveActivePortal;
ResourceOwner saveResourceOwner;
MemoryContext savePortalContext;
MemoryContext saveMemoryContext;
AssertArg(PortalIsValid(portal));
TRACE_POSTGRESQL_QUERY_EXECUTE_START();
/* Initialize completion tag to empty string */
//初始化completionTag为空串
if (completionTag)
completionTag[0] = '\0';
if (log_executor_stats && portal->strategy != PORTAL_MULTI_QUERY)
{
elog(DEBUG3, "PortalRun");
/* PORTAL_MULTI_QUERY logs its own stats per query */
ResetUsage();
}
/*
* Check for improper portal use, and mark portal active.
* 检查portal是否使用得当,如OK则标记为活动。
*/
MarkPortalActive(portal);
/* Set run_once flag. Shouldn't be clear if previously set. */
//设置run_once标记,如果先前已设置,则不要清除此标记
Assert(!portal->run_once || run_once);
portal->run_once = run_once;
/*
* Set up global portal context pointers.
* 设置全局portal上下文指针
*
* We have to play a special game here to support utility commands like
* VACUUM and CLUSTER, which internally start and commit transactions.
* When we are called to execute such a command, CurrentResourceOwner will
* be pointing to the TopTransactionResourceOwner --- which will be
* destroyed and replaced in the course of the internal commit and
* restart. So we need to be prepared to restore it as pointing to the
* exit-time TopTransactionResourceOwner. (Ain't that ugly? This idea of
* internally starting whole new transactions is not good.)
* CurrentMemoryContext has a similar problem, but the other pointers we
* save here will be NULL or pointing to longer-lived objects.
* 我们必须在这里玩一个特殊的"把戏"来支持像VACUUM和CLUSTER这样的实用命令,它们在内部启动和提交事务。
* 当被调用执行这样的命令时,CurrentResourceOwner将指向
* TopTransactionResourceOwner——它将在内部提交和重新启动的过程中被销毁和替换。
* 因此,我们需要准备将其恢复为指向exit-time TopTransactionResourceOwner。
* (这样的做法很丑陋吧?这种内部启动全新事务的想法其实是不好的。)
* CurrentMemoryContext也有类似的问题,但是在这里保存的其他指针将为NULL,或者指向生命周期更长的对象。
*/
//保存"现场"
saveTopTransactionResourceOwner = TopTransactionResourceOwner;
saveTopTransactionContext = TopTransactionContext;
saveActivePortal = ActivePortal;
saveResourceOwner = CurrentResourceOwner;
savePortalContext = PortalContext;
saveMemoryContext = CurrentMemoryContext;
PG_TRY();
{
ActivePortal = portal;
if (portal->resowner)
CurrentResourceOwner = portal->resowner;
PortalContext = portal->portalContext;
MemoryContextSwitchTo(PortalContext);
switch (portal->strategy)//根据场景执行不同的逻辑
{
case PORTAL_ONE_SELECT:
case PORTAL_ONE_RETURNING:
case PORTAL_ONE_MOD_WITH:
case PORTAL_UTIL_SELECT:
/*
* If we have not yet run the command, do so, storing its
* results in the portal's tuplestore. But we don't do that
* for the PORTAL_ONE_SELECT case.
* 如果还没有运行该命令,则先运行它,并将其结果存储在portal的tuplestore中。
* 但对于PORTAL_ONE_SELECT,则不会这样做。
*/
if (portal->strategy != PORTAL_ONE_SELECT && !portal->holdStore)
FillPortalStore(portal, isTopLevel);
/*
* Now fetch desired portion of results.
* 现在开始获取所需的部分结果-->执行PortalRunSelect。
*/
nprocessed = PortalRunSelect(portal, true, count, dest);
/*
* If the portal result contains a command tag and the caller
* gave us a pointer to store it, copy it. Patch the "SELECT"
* tag to also provide the rowcount.
* 如果portal结果包含一个命令标记,调用者将给我们一个指针来存储,需要复制此标记。
* 修补“SELECT”标签以提供行数。
*/
if (completionTag && portal->commandTag)
{
if (strcmp(portal->commandTag, "SELECT") == 0)
snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
"SELECT " UINT64_FORMAT, nprocessed);
else
strcpy(completionTag, portal->commandTag);
}
/* Mark portal not active */
//标记portal为PORTAL_READY
portal->status = PORTAL_READY;
/*
* Since it's a forward fetch, say DONE iff atEnd is now true.
* 由于这是前向获取,设置result为atEnd
*/
result = portal->atEnd;
break;
case PORTAL_MULTI_QUERY:
PortalRunMulti(portal, isTopLevel, false,
dest, altdest, completionTag);
/* Prevent portal's commands from being re-executed */
//防止portal命令重复执行
MarkPortalDone(portal);
/* Always complete at end of RunMulti */
//在RunMulti最后设置result为T
result = true;
break;
default://错误的场景
elog(ERROR, "unrecognized portal strategy: %d",
(int) portal->strategy);
result = false; /* 让编译器"闭嘴";keep compiler quiet */
break;
}
}
PG_CATCH();
{
/* Uncaught error while executing portal: mark it dead */
//未捕获的错误,设置portal状态为dead
MarkPortalFailed(portal);
/* Restore global vars and propagate error */
//恢复全局的vars并抛出错误
if (saveMemoryContext == saveTopTransactionContext)
MemoryContextSwitchTo(TopTransactionContext);
else
MemoryContextSwitchTo(saveMemoryContext);
ActivePortal = saveActivePortal;
if (saveResourceOwner == saveTopTransactionResourceOwner)
CurrentResourceOwner = TopTransactionResourceOwner;
else
CurrentResourceOwner = saveResourceOwner;
PortalContext = savePortalContext;
PG_RE_THROW();
}
PG_END_TRY();
if (saveMemoryContext == saveTopTransactionContext)
MemoryContextSwitchTo(TopTransactionContext);
else
MemoryContextSwitchTo(saveMemoryContext);
ActivePortal = saveActivePortal;
if (saveResourceOwner == saveTopTransactionResourceOwner)
CurrentResourceOwner = TopTransactionResourceOwner;
else
CurrentResourceOwner = saveResourceOwner;
PortalContext = savePortalContext;
if (log_executor_stats && portal->strategy != PORTAL_MULTI_QUERY)
ShowUsage("EXECUTOR STATISTICS");
TRACE_POSTGRESQL_QUERY_EXECUTE_DONE();
return result;
}
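exec_simple_query中对PortalRun的调用如下所示(示意;与后面gdb跟踪中count=9223372036854775807即FETCH_ALL相对应):
/* 示意:简单查询协议中一次性取回全部结果 */
(void) PortalRun(portal,
                 FETCH_ALL,        /* 取回所有行 */
                 true,             /* isTopLevel: 直接来自客户端命令 */
                 true,             /* run_once: 只执行一次 */
                 receiver,         /* dest: 主查询的输出目的地 */
                 receiver,         /* altdest: 非主查询同样输出到receiver */
                 completionTag);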
PortalDrop
PortalDrop函数销毁portal结构体
//------------------------------------------------------ PortalDrop
/*
* PortalDrop
* Destroy the portal.
* 销毁portal结构体
*/
void
PortalDrop(Portal portal, bool isTopCommit)
{
AssertArg(PortalIsValid(portal));
/*
* Don't allow dropping a pinned portal, it's still needed by whoever
* pinned it.
* 不允许删除pinned portal,pin它的调用方仍然需要它
*/
if (portal->portalPinned)
ereport(ERROR,
(errcode(ERRCODE_INVALID_CURSOR_STATE),
errmsg("cannot drop pinned portal \"%s\"", portal->name)));
/*
* Not sure if the PORTAL_ACTIVE case can validly happen or not...
* 不确定PORTAL_ACTIVE这种场景是否能有效发生…
*/
if (portal->status == PORTAL_ACTIVE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_CURSOR_STATE),
errmsg("cannot drop active portal \"%s\"", portal->name)));
/*
* Allow portalcmds.c to clean up the state it knows about, in particular
* shutting down the executor if still active. This step potentially runs
* user-defined code so failure has to be expected. It's the cleanup
* hook's responsibility to not try to do that more than once, in the case
* that failure occurs and then we come back to drop the portal again
* during transaction abort.
* 允许portalcmds.c清理相关状态,特别是关闭执行器(如果执行器仍然活跃)。
* 这个步骤可能运行用户自定义的代码,因此必须预料到会可能出现故障。
* 如果发生了故障,随后在事务中止期间我们又回来再次删除portal,清理钩子有责任避免重复执行清理。
*
* Note: in most paths of control, this will have been done already in
* MarkPortalDone or MarkPortalFailed. We're just making sure.
* 注意:在大多数控制路径中,这将在MarkPortalDone或MarkPortalFailed中完成。但需要确认。
*
*/
if (PointerIsValid(portal->cleanup))
{
portal->cleanup(portal);
portal->cleanup = NULL;
}
/*
* Remove portal from hash table. Because we do this here, we will not
* come back to try to remove the portal again if there's any error in the
* subsequent steps. Better to leak a little memory than to get into an
* infinite error-recovery loop.
* 从哈希表中删除portal。
* 因为在这里这样做,所以如果后续步骤中出现任何错误,将不再试图再次删除portal。
* 泄漏一点内存总比陷入无限的错误恢复循环要好。
*/
PortalHashTableDelete(portal);
/* drop cached plan reference, if any */
//清除已缓存的plan引用
PortalReleaseCachedPlan(portal);
/*
* If portal has a snapshot protecting its data, release that. This needs
* a little care since the registration will be attached to the portal's
* resowner; if the portal failed, we will already have released the
* resowner (and the snapshot) during transaction abort.
* 如果portal有一个保护其数据的快照,那么释放它。
* 这需要稍加小心,因为该快照是注册在portal的resowner上的;
* 如果portal执行失败,那么在事务中止期间resowner(连同快照)已经被释放了。
*/
if (portal->holdSnapshot)
{
if (portal->resowner)
UnregisterSnapshotFromOwner(portal->holdSnapshot,
portal->resowner);
portal->holdSnapshot = NULL;
}
/*
* Release any resources still attached to the portal. There are several
* cases being covered here:
* 释放仍附加到portal的所有资源。这里涉及几种情况:
*
* Top transaction commit (indicated by isTopCommit): normally we should
* do nothing here and let the regular end-of-transaction resource
* releasing mechanism handle these resources too. However, if we have a
* FAILED portal (eg, a cursor that got an error), we'd better clean up
* its resources to avoid resource-leakage warning messages.
* Top事务提交(由isTopCommit表示):通常在这里什么也不做,让常规的事务结束资源释放机制也处理这些资源。
* 但是,如果有一个失败的portal(例如,游标出错),那么最好清理它的资源,以避免资源泄漏警告消息。
*
* Sub transaction commit: never comes here at all, since we don't kill
* any portals in AtSubCommit_Portals().
* 子事务提交:永远不会走到这里,因为在AtSubCommit_Portals()中不会销毁任何portal。
*
* Main or sub transaction abort: we will do nothing here because
* portal->resowner was already set NULL; the resources were already
* cleaned up in transaction abort.
* 主事务或子事务中止:什么也不做,因为portal->resowner已经设置为NULL;事务中止中已经清理了资源。
*
* Ordinary portal drop: must release resources. However, if the portal
* is not FAILED then we do not release its locks. The locks become the
* responsibility of the transaction's ResourceOwner (since it is the
* parent of the portal's owner) and will be released when the transaction
* eventually ends.
* 普通portal清除:必须释放资源。
* 然而,如果portal没有失败,那么不会释放它的锁。
* 锁转由事务的ResourceOwner负责(因为它是portal的resowner的父节点),并在事务最终结束时释放。
*/
if (portal->resowner &&
(!isTopCommit || portal->status == PORTAL_FAILED))
{
bool isCommit = (portal->status != PORTAL_FAILED);
ResourceOwnerRelease(portal->resowner,
RESOURCE_RELEASE_BEFORE_LOCKS,
isCommit, false);
ResourceOwnerRelease(portal->resowner,
RESOURCE_RELEASE_LOCKS,
isCommit, false);
ResourceOwnerRelease(portal->resowner,
RESOURCE_RELEASE_AFTER_LOCKS,
isCommit, false);
ResourceOwnerDelete(portal->resowner);
}
portal->resowner = NULL;
/*
* Delete tuplestore if present. We should do this even under error
* conditions; since the tuplestore would have been using cross-
* transaction storage, its temp files need to be explicitly deleted.
* 如果存在,删除tuplestore。
* 即使在出错的情况下也应该这样做;由于tuplestore可能使用了跨事务的存储,其临时文件需要被显式删除。
*/
if (portal->holdStore)
{
MemoryContext oldcontext;
oldcontext = MemoryContextSwitchTo(portal->holdContext);
tuplestore_end(portal->holdStore);
MemoryContextSwitchTo(oldcontext);
portal->holdStore = NULL;
}
/* delete tuplestore storage, if any */
//删除tuplestore存储
if (portal->holdContext)
MemoryContextDelete(portal->holdContext);
/* release subsidiary storage */
//释放portalContext存储
MemoryContextDelete(portal->portalContext);
/* release portal struct (it's in TopPortalContext) */
//释放portal结构体(在TopPortalContext中)
pfree(portal);
}
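执行完成后,exec_simple_query先销毁接收器,再删除未命名portal(isTopCommit为false,即普通的portal清除),示意如下:
/* 示意:exec_simple_query执行结束后的清理 */
receiver->rDestroy(receiver);   /* 销毁DestReceiver */
PortalDrop(portal, false);      /* 普通portal清除:isTopCommit = false */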
测试脚本如下
testdb=# explain select dw.*,grjf.grbh,grjf.xm,grjf.ny,grjf.je
testdb-# from t_dwxx dw,lateral (select gr.grbh,gr.xm,jf.ny,jf.je
testdb(# from t_grxx gr inner join t_jfxx jf
testdb(# on gr.dwbh = dw.dwbh
testdb(# and gr.grbh = jf.grbh) grjf
testdb-# order by dw.dwbh;
QUERY PLAN
------------------------------------------------------------------------------------------
Sort (cost=20070.93..20320.93 rows=100000 width=47)
Sort Key: dw.dwbh
-> Hash Join (cost=3754.00..8689.61 rows=100000 width=47)
Hash Cond: ((gr.dwbh)::text = (dw.dwbh)::text)
-> Hash Join (cost=3465.00..8138.00 rows=100000 width=31)
Hash Cond: ((jf.grbh)::text = (gr.grbh)::text)
-> Seq Scan on t_jfxx jf (cost=0.00..1637.00 rows=100000 width=20)
-> Hash (cost=1726.00..1726.00 rows=100000 width=16)
-> Seq Scan on t_grxx gr (cost=0.00..1726.00 rows=100000 width=16)
-> Hash (cost=164.00..164.00 rows=10000 width=20)
-> Seq Scan on t_dwxx dw (cost=0.00..164.00 rows=10000 width=20)
(11 rows)
启动gdb,设置断点,进入exec_simple_query
(gdb) b exec_simple_query
Breakpoint 1 at 0x8c59af: file postgres.c, line 893.
(gdb) c
Continuing.
Breakpoint 1, exec_simple_query (
query_string=0x2a9eeb8 "select dw.*,grjf.grbh,grjf.xm,grjf.ny,grjf.je \nfrom t_dwxx dw,lateral (select gr.grbh,gr.xm,jf.ny,jf.je \n", ' ' , "from t_grxx gr inner join t_jfxx jf \n", ' ' ...) at postgres.c:893
893 CommandDest dest = whereToSendOutput;
(gdb)
进入CreatePortal
1058 CHECK_FOR_INTERRUPTS();
(gdb)
1064 portal = CreatePortal("", true, true);
(gdb) step
CreatePortal (name=0xc5b7d8 "", allowDup=true, dupSilent=true) at portalmem.c:179
179 AssertArg(PointerIsValid(name));
CreatePortal-->设置portal的相关信息
216 portal->atEnd = true; /* disallow fetches until query is set */
(gdb)
217 portal->visible = true;
(gdb)
218 portal->creation_time = GetCurrentStatementStartTimestamp();
(gdb)
221 PortalHashTableInsert(portal, name);
(gdb)
224 MemoryContextSetIdentifier(portal->portalContext, portal->name);
(gdb)
226 return portal;
CreatePortal-->查看portal结构体
(gdb) p *portal
$1 = {name = 0x2b07e90 "", prepStmtName = 0x0, portalContext = 0x2b8b7a0, resowner = 0x2acfe80,
cleanup = 0x6711b6 <PortalCleanup>, createSubid = 1, activeSubid = 1, sourceText = 0x0, commandTag = 0x0, stmts = 0x0,
cplan = 0x0, portalParams = 0x0, queryEnv = 0x0, strategy = PORTAL_MULTI_QUERY, cursorOptions = 4, run_once = false,
status = PORTAL_NEW, portalPinned = false, autoHeld = false, queryDesc = 0x0, tupDesc = 0x0, formats = 0x0,
holdStore = 0x0, holdContext = 0x0, holdSnapshot = 0x0, atStart = true, atEnd = true, portalPos = 0,
creation_time = 595049454962775, visible = true}
回到exec_simple_query
(gdb)
exec_simple_query (
query_string=0x2a9eeb8 "select dw.*,grjf.grbh,grjf.xm,grjf.ny,grjf.je \nfrom t_dwxx dw,lateral (select gr.grbh,gr.xm,jf.ny,jf.je \n", ' ' , "from t_grxx gr inner join t_jfxx jf \n", ' ' ...) at postgres.c:1066
1066 portal->visible = false;
进入PortalDefineQuery
(gdb)
1073 PortalDefineQuery(portal,
(gdb) step
PortalDefineQuery (portal=0x2b04468, prepStmtName=0x0,
sourceText=0x2a9eeb8 "select dw.*,grjf.grbh,grjf.xm,grjf.ny,grjf.je \nfrom t_dwxx dw,lateral (select gr.grbh,gr.xm,jf.ny,jf.je \n", ' ' , "from t_grxx gr inner join t_jfxx jf \n", ' ' ...,
commandTag=0xc5eed5 "SELECT", stmts=0x2b86800, cplan=0x0) at portalmem.c:288
288 AssertArg(PortalIsValid(portal));
PortalDefineQuery-->设置相关参数
294 portal->prepStmtName = prepStmtName;
(gdb)
295 portal->sourceText = sourceText;
(gdb)
296 portal->commandTag = commandTag;
(gdb)
297 portal->stmts = stmts;
(gdb)
298 portal->cplan = cplan;
(gdb)
299 portal->status = PORTAL_DEFINED;
(gdb)
300 }
PortalDefineQuery-->查看portal结构体
(gdb) p *portal
$2 = {name = 0x2b07e90 "", prepStmtName = 0x0, portalContext = 0x2b8b7a0, resowner = 0x2acfe80,
cleanup = 0x6711b6 <PortalCleanup>, createSubid = 1, activeSubid = 1,
sourceText = 0x2a9eeb8 "select dw.*,grjf.grbh,grjf.xm,grjf.ny,grjf.je \nfrom t_dwxx dw,lateral (select gr.grbh,gr.xm,jf.ny,jf.je \n", ' ' , "from t_grxx gr inner join t_jfxx jf \n", ' ' ...,
commandTag = 0xc5eed5 "SELECT", stmts = 0x2b86800, cplan = 0x0, portalParams = 0x0, queryEnv = 0x0,
strategy = PORTAL_MULTI_QUERY, cursorOptions = 4, run_once = false, status = PORTAL_DEFINED, portalPinned = false,
autoHeld = false, queryDesc = 0x0, tupDesc = 0x0, formats = 0x0, holdStore = 0x0, holdContext = 0x0, holdSnapshot = 0x0,
atStart = true, atEnd = true, portalPos = 0, creation_time = 595049454962775, visible = false}
回到exec_simple_query,进入PortalSetResultFormat
(gdb)
1105 PortalSetResultFormat(portal, 1, &format);
(gdb) step
PortalSetResultFormat (portal=0x2b04468, nFormats=1, formats=0x7ffff7153cbe) at pquery.c:633
633 if (portal->tupDesc == NULL)
PortalSetResultFormat-->需返回元组,nFormats为1
...
(gdb) p *portal->tupDesc
$4 = {natts = 7, tdtypeid = 2249, tdtypmod = -1, tdhasoid = false, tdrefcount = -1, constr = 0x0, attrs = 0x2b989c8}
(gdb)
(gdb) p nFormats
$5 = 1
PortalSetResultFormat-->格式码为0
(gdb) p *portal->formats
$7 = 0
回到exec_simple_query,进入PortalRun
(gdb)
1122 (void) PortalRun(portal,
(gdb) step
PortalRun (portal=0x2b04468, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2b86838, altdest=0x2b86838,
completionTag=0x7ffff7153c70 ":\001") at pquery.c:702
702 AssertArg(PortalIsValid(portal));
PortalRun-->初始化completionTag为空串
707 if (completionTag)
(gdb)
708 completionTag[0] = '\0';
(gdb) p *completionTag
$12 = 0 '\000'
PortalRun-->设置状态为active等
(gdb) p portal->status
$15 = PORTAL_ACTIVE
(gdb) p portal->run_once
$16 = true
PortalRun-->保护"现场"
(gdb) n
741 saveTopTransactionContext = TopTransactionContext;
(gdb)
742 saveActivePortal = ActivePortal;
(gdb)
743 saveResourceOwner = CurrentResourceOwner;
(gdb)
744 savePortalContext = PortalContext;
(gdb)
745 saveMemoryContext = CurrentMemoryContext;
PortalRun-->开始执行
(gdb)
746 PG_TRY();
PortalRun-->根据场景调用相应的函数,在这里是PortalRunSelect
...
(gdb)
755 switch (portal->strategy)
(gdb)
767 if (portal->strategy != PORTAL_ONE_SELECT && !portal->holdStore)
(gdb) n
773 nprocessed = PortalRunSelect(portal, true, count, dest);
PortalRun-->处理行数的计数
(gdb) p nprocessed
$17 = 99991
设置命令完成标记
(gdb) n
782 if (strcmp(portal->commandTag, "SELECT") == 0)
(gdb)
783 snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
设置portal状态为PORTAL_READY,结果为T
(gdb)
790 portal->status = PORTAL_READY;
(gdb) p portal->status
$18 = PORTAL_ACTIVE
(gdb) n
795 result = portal->atEnd;
(gdb)
796 break;
(gdb) p result
$19 = true
恢复"现场",返回结果
...
846 PortalContext = savePortalContext;
(gdb)
848 if (log_executor_stats && portal->strategy != PORTAL_MULTI_QUERY)
(gdb)
851 TRACE_POSTGRESQL_QUERY_EXECUTE_DONE();
(gdb)
853 return result;
(gdb)
854 }
回到exec_simple_query,进入PortalDrop
(gdb) n
exec_simple_query (
query_string=0x2a9eeb8 "select dw.*,grjf.grbh,grjf.xm,grjf.ny,grjf.je \nfrom t_dwxx dw,lateral (select gr.grbh,gr.xm,jf.ny,jf.je \n", ' ' , "from t_grxx gr inner join t_jfxx jf \n", ' ' ...) at postgres.c:1130
1130 receiver->rDestroy(receiver);
(gdb)
1132 PortalDrop(portal, false);
PortalDrop-->释放资源
...
(gdb)
589 MemoryContextDelete(portal->portalContext);
(gdb)
592 pfree(portal);
(gdb)
593 }
DONE!
postgres.c
PG Document: Query Planning