F.36. tablefunc

The tablefunc module includes various functions that return tables (that is, multiple rows). These functions are useful both in their own right and as examples of how to write C functions that return multiple rows.

F.36.1. Functions Provided

Table F-27 shows the functions provided by the tablefunc module.

Table F-27. tablefunc functions

Function | Returns | Description
normal_rand(int numvals, float8 mean, float8 stddev) | setof float8 | Produces a set of normally distributed random values
crosstab(text sql) | setof record | Produces a "pivot table" containing row names plus N value columns, where N is determined by the row type specified in the calling query
crosstabN(text sql) | setof tablefunc_crosstab_N | Produces a "pivot table" containing row names plus N value columns. crosstab2, crosstab3, and crosstab4 are predefined, but you can create additional crosstabN functions as described below
crosstab(text source_sql, text category_sql) | setof record | Produces a "pivot table" with the value columns specified by a second query
crosstab(text sql, int N) | setof record | Obsolete version of crosstab(text). The parameter N is now ignored, since the number of value columns is always determined by the calling query
connectby(text relname, text keyid_fld, text parent_keyid_fld [, text orderby_fld ], text start_with, int max_depth [, text branch_delim ]) | setof record | Produces a representation of a hierarchical tree structure

F.36.1.1. normal_rand

normal_rand(int numvals, float8 mean, float8 stddev) returns setof float8

normal_rand produces a set of normally distributed random values (Gaussian distribution).

numvals is the number of values to be returned from the function. mean and stddev are the mean and standard deviation of the normal distribution from which the values are drawn.

For example, this call requests 1000 values with a mean of 5 and a standard deviation of 3:

test=# SELECT * FROM normal_rand(1000, 5, 3);
     normal_rand
----------------------
     1.56556322244898
     9.10040991424657
     5.36957140345079
   -0.369151492880995
    0.283600703686639
       .
       .
       .
     4.82992125404908
     9.71308014517282
     2.49639286969028
(1000 rows)
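
Because the result is an ordinary set of float8 values, it can be aggregated like any other set. As a quick sanity check (a sketch only; the alias n is arbitrary), the sample mean and standard deviation should come out near the requested 5 and 3:

SELECT avg(n), stddev(n) FROM normal_rand(1000, 5, 3) AS n;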

F.36.1.2. crosstab(text)

crosstab(text sql)
crosstab(text sql, int N)

The crosstab function is used to produce "pivot" displays, wherein data is listed across the page rather than down. For example, we might have data like

row1    val11
row1    val12
row1    val13
...
row2    val21
row2    val22
row2    val23
...

which we wish to display like

row1    val11   val12   val13   ...
row2    val21   val22   val23   ...
...

The crosstab function takes a text parameter that is a SQL query producing raw data formatted in the first way, and produces a table formatted in the second way.

The sql parameter is a SQL statement that produces the source set of data. This statement must return one row_name column, one category column, and one value column. N is an obsolete parameter, ignored if supplied (formerly this had to match the number of output value columns, but now that is determined by the calling query).

For example, the provided query might produce a set something like:

 row_name    cat    value
----------+-------+-------
  row1      cat1    val1
  row1      cat2    val2
  row1      cat3    val3
  row1      cat4    val4
  row2      cat1    val5
  row2      cat2    val6
  row2      cat3    val7
  row2      cat4    val8

The crosstab function is declared to return setof record, so the actual names and types of the output columns must be defined in the FROM clause of the calling SELECT statement, for example:

SELECT * FROM crosstab('...') AS ct(row_name text, category_1 text, category_2 text);

This example produces a set something like:

           <== value  columns  ==>
 row_name   category_1   category_2
----------+------------+------------
  row1        val1         val2
  row2        val5         val6

The FROM clause must define the output as one row_name column (of the same data type as the first result column of the SQL query) followed by N value columns (all of the same data type as the third result column of the SQL query). You can set up as many output value columns as you wish. The names of the output columns are up to you.

The crosstab function produces one output row for each consecutive group of input rows with the same row_name value. It fills the output value columns, left to right, with the value fields from these rows. If there are fewer rows in a group than there are output value columns, the extra output columns are filled with nulls; if there are more rows, the extra input rows are skipped.

In practice the SQL query should always specify ORDER BY 1,2 to ensure that the input rows are properly ordered, that is, values with the same row_name are brought together and correctly ordered within the row. Notice that crosstab itself does not pay any attention to the second column of the query result; it's just there to be ordered by, to control the order in which the third-column values appear across the page.

Here is a complete example:

CREATE TABLE ct(id SERIAL, rowid TEXT, attribute TEXT, value TEXT);
INSERT INTO ct(rowid, attribute, value) VALUES('test1','att1','val1');
INSERT INTO ct(rowid, attribute, value) VALUES('test1','att2','val2');
INSERT INTO ct(rowid, attribute, value) VALUES('test1','att3','val3');
INSERT INTO ct(rowid, attribute, value) VALUES('test1','att4','val4');
INSERT INTO ct(rowid, attribute, value) VALUES('test2','att1','val5');
INSERT INTO ct(rowid, attribute, value) VALUES('test2','att2','val6');
INSERT INTO ct(rowid, attribute, value) VALUES('test2','att3','val7');
INSERT INTO ct(rowid, attribute, value) VALUES('test2','att4','val8');

SELECT *
FROM crosstab(
  'select rowid, attribute, value
   from ct
   where attribute = ''att2'' or attribute = ''att3''
   order by 1,2')
AS ct(row_name text, category_1 text, category_2 text, category_3 text);

 row_name | category_1 | category_2 | category_3
----------+------------+------------+------------
 test1    | val2       | val3       |
 test2    | val6       | val7       |
(2 rows)

You can avoid always having to write out a FROM clause to define the output columns, by setting up a custom crosstab function that has the desired output row type wired into its definition. This is described in the next section. Another possibility is to embed the required FROM clause in a view definition.
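
For instance, a view wired to the ct example above could look like this (a sketch only; the view name ct_pivot is illustrative):

CREATE VIEW ct_pivot AS
  SELECT * FROM crosstab(
    'select rowid, attribute, value
     from ct
     where attribute = ''att2'' or attribute = ''att3''
     order by 1,2')
  AS ct(row_name text, category_1 text, category_2 text, category_3 text);

SELECT * FROM ct_pivot;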

F.36.1.3. crosstabN(text)

crosstabN(text sql)

The crosstabN functions are examples of how to set up custom wrappers for the general crosstab function, so that you need not write out column names and types in the calling SELECT query. The tablefunc module includes crosstab2, crosstab3, and crosstab4, whose output row types are defined as

CREATE TYPE tablefunc_crosstab_N AS (
    row_name TEXT,
    category_1 TEXT,
    category_2 TEXT,
        .
        .
        .
    category_N TEXT
);

Thus, these functions can be used directly when the input query produces row_name and value columns of type text, and you want 2, 3, or 4 output value columns. In all other ways they behave exactly as described above for the general crosstab function.

For instance, the example given in the previous section would also work as

SELECT *
FROM crosstab3(
  'select rowid, attribute, value
   from ct
   where attribute = ''att2'' or attribute = ''att3''
   order by 1,2');

These functions are provided mostly for illustration purposes. You can create your own return types and functions based on the underlying crosstab() function. There are two ways to do it:

  • Create a composite type describing the desired output columns, similar to the examples in the installation script. Then define a unique function name accepting one text parameter and returning setof your_type_name, but linking to the same underlying crosstab C function. For example, if your source data produces row names that are text, and values that are float8, and you want 5 value columns:

    CREATE TYPE my_crosstab_float8_5_cols AS (
        my_row_name text,
        my_category_1 float8,
        my_category_2 float8,
        my_category_3 float8,
        my_category_4 float8,
        my_category_5 float8
    );
    
    CREATE OR REPLACE FUNCTION crosstab_float8_5_cols(text)
        RETURNS setof my_crosstab_float8_5_cols
        AS '$libdir/tablefunc','crosstab' LANGUAGE C STABLE STRICT;

  • Use OUT parameters to define the return type implicitly. The same example could also be done this way:

    CREATE OR REPLACE FUNCTION crosstab_float8_5_cols(
        IN text,
        OUT my_row_name text,
        OUT my_category_1 float8,
        OUT my_category_2 float8,
        OUT my_category_3 float8,
        OUT my_category_4 float8,
        OUT my_category_5 float8)
      RETURNS setof record
      AS '$libdir/tablefunc','crosstab' LANGUAGE C STABLE STRICT;
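
With either of these definitions in place, the wrapper is called with just the source query, for example (a sketch only; the source table and column names here are hypothetical):

SELECT *
FROM crosstab_float8_5_cols(
  'select rowname, attrib, val from my_float_data order by 1,2');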

F.36.1.4. crosstab(text, text)

crosstab(text source_sql, text category_sql)

The main limitation of the single-parameter form of crosstab is that it treats all values in a group alike, inserting each value into the first available column. If you want the value columns to correspond to specific categories of data, and some groups might not have data for some of the categories, that doesn't work well. The two-parameter form of crosstab handles this case by providing an explicit list of the categories corresponding to the output columns.

source_sql is a SQL statement that produces the source set of data. This statement must return one row_name column, one category column, and one value column. It may also have one or more "extra" columns. The row_name column must be first. The category and value columns must be the last two columns, in that order. Any columns between row_name and category are treated as "extra". The "extra" columns are expected to be the same for all rows with the same row_name value.

For example, source_sql might produce a set something like:

 SELECT row_name, extra_col, cat, value FROM foo ORDER BY 1;

     row_name    extra_col   cat    value
    ----------+------------+-----+---------
      row1         extra1    cat1    val1
      row1         extra1    cat2    val2
      row1         extra1    cat4    val4
      row2         extra2    cat1    val5
      row2         extra2    cat2    val6
      row2         extra2    cat3    val7
      row2         extra2    cat4    val8

category_sql is a SQL statement that produces the set of categories. This statement must return only one column. It must produce at least one row, or an error will be generated. Also, it must not produce duplicate values, or an error will be generated. category_sql might be something like:

SELECT DISTINCT cat FROM foo ORDER BY 1;
    cat
  -------
    cat1
    cat2
    cat3
    cat4

The crosstab function is declared to return setof record, so the actual names and types of the output columns must be defined in the FROM clause of the calling SELECT statement, for example:

SELECT * FROM crosstab('...', '...')
    AS ct(row_name text, extra text, cat1 text, cat2 text, cat3 text, cat4 text);

This will produce a result something like:

                  <==  value  columns   ==>
row_name   extra   cat1   cat2   cat3   cat4
---------+-------+------+------+------+------
  row1     extra1  val1   val2          val4
  row2     extra2  val5   val6   val7   val8

The FROM clause must define the proper number of output columns of the proper data types. If there are N columns in the source_sql query's result, the first N-2 of them must match up with the first N-2 output columns. The remaining output columns must have the type of the last column of the source_sql query's result, and there must be exactly as many of them as there are rows in the category_sql query's result.

The crosstab function produces one output row for each consecutive group of input rows with the same row_name value. The output row_name column, plus any "extra" columns, are copied from the first row of the group. The output value columns are filled with the value fields from rows having matching category values. If a row's category does not match any output of the category_sql query, its value is ignored. Output columns whose matching category is not present in any input row of the group are filled with nulls.

In practice the source_sql query should always specify ORDER BY 1 to ensure that values with the same row_name are brought together. However, ordering of the categories within a group is not important. Also, it is essential to be sure that the order of the category_sql query's output matches the specified output column order.

Here are two complete examples:

create table sales(year int, month int, qty int);
insert into sales values(2007, 1, 1000);
insert into sales values(2007, 2, 1500);
insert into sales values(2007, 7, 500);
insert into sales values(2007, 11, 1500);
insert into sales values(2007, 12, 2000);
insert into sales values(2008, 1, 1000);

select * from crosstab(
  'select year, month, qty from sales order by 1',
  'select m from generate_series(1,12) m'
) as (
  year int,
  "Jan" int,
  "Feb" int,
  "Mar" int,
  "Apr" int,
  "May" int,
  "Jun" int,
  "Jul" int,
  "Aug" int,
  "Sep" int,
  "Oct" int,
  "Nov" int,
  "Dec" int
);
 year | Jan  | Feb  | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov  | Dec
------+------+------+-----+-----+-----+-----+-----+-----+-----+-----+------+------
 2007 | 1000 | 1500 |     |     |     |     | 500 |     |     |     | 1500 | 2000
 2008 | 1000 |      |     |     |     |     |     |     |     |     |      |
(2 rows)

CREATE TABLE cth(rowid text, rowdt timestamp, attribute text, val text);
INSERT INTO cth VALUES('test1','01 March 2003','temperature','42');
INSERT INTO cth VALUES('test1','01 March 2003','test_result','PASS');
INSERT INTO cth VALUES('test1','01 March 2003','volts','2.6987');
INSERT INTO cth VALUES('test2','02 March 2003','temperature','53');
INSERT INTO cth VALUES('test2','02 March 2003','test_result','FAIL');
INSERT INTO cth VALUES('test2','02 March 2003','test_startdate','01 March 2003');
INSERT INTO cth VALUES('test2','02 March 2003','volts','3.1234');

SELECT * FROM crosstab
(
  'SELECT rowid, rowdt, attribute, val FROM cth ORDER BY 1',
  'SELECT DISTINCT attribute FROM cth ORDER BY 1'
)
AS
(
       rowid text,
       rowdt timestamp,
       temperature int4,
       test_result text,
       test_startdate timestamp,
       volts float8
);
 rowid |          rowdt           | temperature | test_result |      test_startdate      | volts
-------+--------------------------+-------------+-------------+--------------------------+--------
 test1 | Sat Mar 01 00:00:00 2003 |          42 | PASS        |                          | 2.6987
 test2 | Sun Mar 02 00:00:00 2003 |          53 | FAIL        | Sat Mar 01 00:00:00 2003 | 3.1234
(2 rows)

You can create predefined functions to avoid having to write out the result column names and types in each query. See the examples in the previous section. The underlying C function for this form of crosstab is named crosstab_hash.
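
For instance, a wrapper for the sales example above might be declared like this (a sketch only; the type and function names are illustrative):

CREATE TYPE sales_by_month AS (
    year int,
    "Jan" int, "Feb" int, "Mar" int, "Apr" int, "May" int, "Jun" int,
    "Jul" int, "Aug" int, "Sep" int, "Oct" int, "Nov" int, "Dec" int
);

CREATE OR REPLACE FUNCTION crosstab_sales(text, text)
    RETURNS setof sales_by_month
    AS '$libdir/tablefunc','crosstab_hash' LANGUAGE C STABLE STRICT;

SELECT * FROM crosstab_sales(
  'select year, month, qty from sales order by 1',
  'select m from generate_series(1,12) m');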

F.36.1.5. connectby

connectby(text relname, text keyid_fld, text parent_keyid_fld
          [, text orderby_fld ], text start_with, int max_depth
          [, text branch_delim ])

The connectby function produces a display of hierarchical data that is stored in a table. The table must have a key field that uniquely identifies rows, and a parent-key field that references the parent (if any) of each row. connectby can display the sub-tree descending from any row.

Table F-28 explains the parameters.

Table F-28. connectby parameters

Parameter | Description
relname | Name of the source relation
keyid_fld | Name of the key field
parent_keyid_fld | Name of the parent-key field
orderby_fld | Name of the field to order siblings by (optional)
start_with | Key value of the row to start at
max_depth | Maximum depth to descend to, or zero for unlimited depth
branch_delim | String to separate keys with in branch output (optional)

The key and parent-key fields can be any data type, but they must be the same type. Note that the start_with value must be entered as a text string, regardless of the type of the key field.
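
For instance, with a hypothetical parts table keyed by integers, the starting key is still written as a text literal, while the output columns use the key's actual type:

-- hypothetical table: parts(part_id int, parent_part_id int)
SELECT * FROM connectby('parts', 'part_id', 'parent_part_id', '42', 0)
    AS t(part_id int, parent_part_id int, level int);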

The connectby function is declared to return setof record, so the actual names and types of the output columns must be defined in the FROM clause of the calling SELECT statement, for example:

SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2', 0, '~')
    AS t(keyid text, parent_keyid text, level int, branch text, pos int);

The first two output columns are used for the current row's key and its parent row's key; they must match the type of the table's key field. The third output column is the depth in the tree and must be of type integer. If a branch_delim parameter was given, the next output column is the branch display and must be of type text. Finally, if an orderby_fld parameter was given, the last output column is a serial number, and must be of type integer.

The "branch" output column shows the path of keys taken to reach the current row. The keys are separated by the specified branch_delim string. If no branch display is wanted, omit both the branch_delim parameter and the branch column in the output column list.

If the ordering of siblings of the same parent is important, include the orderby_fld parameter to specify which field to order siblings by. This field can be of any sortable data type. The output column list must include a final integer serial-number column, if and only if orderby_fld is specified.

The parameters representing table and field names are copied as-is into the SQL queries that connectby generates internally. Therefore, include double quotes if the names are mixed-case or contain special characters. You may also need to schema-qualify the table name.
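
For instance, with a hypothetical mixed-case schema and table, the quoted, schema-qualified names are passed inside the text arguments:

SELECT * FROM connectby('"MySchema"."Tree"', '"KeyId"', '"ParentKeyId"', 'root', 0)
    AS t(keyid text, parent_keyid text, level int);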

In large tables, performance will be poor unless there is an index on the parent-key field.
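
For the connectby_tree example below, such an index could be created as follows (the index name is arbitrary):

CREATE INDEX connectby_tree_parent_idx ON connectby_tree (parent_keyid);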

It is important that the branch_delim string not appear in any key values, else connectby may incorrectly report an infinite-recursion error. Note that if branch_delim is not provided, a default value of ~ is used for recursion detection purposes.

Here is an example:

CREATE TABLE connectby_tree(keyid text, parent_keyid text, pos int);

INSERT INTO connectby_tree VALUES('row1',NULL, 0);
INSERT INTO connectby_tree VALUES('row2','row1', 0);
INSERT INTO connectby_tree VALUES('row3','row1', 0);
INSERT INTO connectby_tree VALUES('row4','row2', 1);
INSERT INTO connectby_tree VALUES('row5','row2', 0);
INSERT INTO connectby_tree VALUES('row6','row4', 0);
INSERT INTO connectby_tree VALUES('row7','row3', 0);
INSERT INTO connectby_tree VALUES('row8','row6', 0);
INSERT INTO connectby_tree VALUES('row9','row5', 0);

-- with branch, without orderby_fld (order of results is not guaranteed)
SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'row2', 0, '~')
 AS t(keyid text, parent_keyid text, level int, branch text);
 keyid | parent_keyid | level |       branch
-------+--------------+-------+---------------------
 row2  |              |     0 | row2
 row4  | row2         |     1 | row2~row4
 row6  | row4         |     2 | row2~row4~row6
 row8  | row6         |     3 | row2~row4~row6~row8
 row5  | row2         |     1 | row2~row5
 row9  | row5         |     2 | row2~row5~row9
(6 rows)

-- without branch, without orderby_fld (order of results is not guaranteed)
SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'row2', 0)
 AS t(keyid text, parent_keyid text, level int);
 keyid | parent_keyid | level
-------+--------------+-------
 row2  |              |     0
 row4  | row2         |     1
 row6  | row4         |     2
 row8  | row6         |     3
 row5  | row2         |     1
 row9  | row5         |     2
(6 rows)

-- with branch, with orderby_fld (notice that row5 comes before row4)
SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2', 0, '~')
 AS t(keyid text, parent_keyid text, level int, branch text, pos int);
 keyid | parent_keyid | level |       branch        | pos
-------+--------------+-------+---------------------+-----
 row2  |              |     0 | row2                |   1
 row5  | row2         |     1 | row2~row5           |   2
 row9  | row5         |     2 | row2~row5~row9      |   3
 row4  | row2         |     1 | row2~row4           |   4
 row6  | row4         |     2 | row2~row4~row6      |   5
 row8  | row6         |     3 | row2~row4~row6~row8 |   6
(6 rows)

-- without branch, with orderby_fld (notice that row5 comes before row4)
SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2', 0)
 AS t(keyid text, parent_keyid text, level int, pos int);
 keyid | parent_keyid | level | pos
-------+--------------+-------+-----
 row2  |              |     0 |   1
 row5  | row2         |     1 |   2
 row9  | row5         |     2 |   3
 row4  | row2         |     1 |   4
 row6  | row4         |     2 |   5
 row8  | row6         |     3 |   6
(6 rows)

F.36.2. Author

Joe Conway
