
The Quest for Performance Part III : C Force

Aug 06, 2024


In the two prior installments of this series, we considered the performance of floating-point operations in Perl,
Python and R in a toy example that computed the function cos(sin(sqrt(x))), where x was a very large array of 50M double-precision floating-point numbers.
Hybrid implementations that delegated the arithmetic-intensive part to C were among the most performant. In this installment, we will digress slightly and look at the performance of a pure C implementation of the toy example.
The C code will provide further insight into the importance of memory locality for performance: by default, the elements of a C array are stored at sequential addresses in memory, and numerical APIs such as PDL or numpy interface with such containers. This contrasts with containers,
e.g. Perl arrays, that do not store their values at sequential addresses in memory. Last, but certainly not least, the C implementations will allow us to assess whether flags related to floating-point operations for the low-level compiler (in this case gcc) can affect performance.
This point is worth emphasizing: common mortals are entirely dependent on the choice of compiler flags when "pip"-ing their "install" or building their Inline file. If one does not touch these flags, then one will be blissfully unaware of what one may be missing, or of the pitfalls one may be avoiding.
The humble makefile of a C project allows one to make such performance evaluations explicit.
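
As a quick, minimal illustration of the locality point (an aside, not part of the benchmark below): consecutive elements of a C array sit exactly sizeof(double) bytes apart, which is what sequential hardware prefetching and SIMD loads rely on.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double *a = malloc(4 * sizeof(double));
    if (a == NULL) return 1;
    // Addresses of consecutive elements differ by exactly sizeof(double) (8 bytes)
    for (int i = 0; i < 4; i++)
        printf("&a[%d] = %p\n", i, (void *)&a[i]);
    free(a);
    return 0;
}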

The C code for our toy example is listed in its entirety below. The code is rather self-explanatory, so we will not spend time explaining it beyond pointing out that it contains four functions for:

  • Non-sequential calculation of the expensive function: all three floating-point operations take place inside a single loop, using one thread
  • Sequential calculation of the expensive function: each of the three floating-point function evaluations takes place inside a separate loop, using one thread
  • Non-sequential OpenMP code: a threaded version of the non-sequential code
  • Sequential OpenMP code: a threaded version of the sequential code

In this case, one may hope that the compiler is smart enough to recognize that the square root maps to packed (vectorized) floating-point operations in assembly, so that one function can be vectorized using the appropriate SIMD instructions (note that we did not use the simd directive for the OpenMP code).
Perhaps the speedup from vectorization will offset the loss of performance from repeatedly accessing the same memory locations (or perhaps not).
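
For completeness, OpenMP does provide a simd construct that can be combined with the worksharing loop. Below is a sketch of what the non-sequential OpenMP kernel might look like with it; the function name map_C_array_using_OMP_simd is hypothetical, this variant was not benchmarked here, and whether the libm calls actually vectorize depends on the compiler and math library.

void map_C_array_using_OMP_simd(double* array, int len) {
#pragma omp parallel
  {
    // "for simd" asks each thread to additionally vectorize its chunk;
    // the compiler may or may not manage to vectorize sqrt/sin/cos calls
#pragma omp for simd schedule(static) nowait
    for (int i = 0; i < len; i++) {
      array[i] = cos(sin(sqrt(array[i])));
    }
  }
}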

#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <stdio.h>
#include <omp.h>

// simulates a large array of random numbers
double*  simulate_array(int num_of_elements,int seed);
// OMP environment functions
void _set_openmp_schedule_from_env();
void _set_num_threads_from_env();



// functions to modify C arrays 
void map_c_array(double* array, int len);
void map_c_array_sequential(double* array, int len);
void map_C_array_using_OMP(double* array, int len);
void map_C_array_sequential_using_OMP(double* array, int len);

int main(int argc, char *argv[]) {
    if (argc != 2) {
        printf("Usage: %s <array_size>\n", argv[0]);
        return 1;
    }

    int array_size = atoi(argv[1]);
    // print the array size
    printf("Array size: %d\n", array_size);
    double *array = simulate_array(array_size, 1234);

    // Set OMP environment
    _set_openmp_schedule_from_env();
    _set_num_threads_from_env();

    // Perform calculations and collect timing data
    double start_time, end_time, elapsed_time;
    // Non-Sequential calculation
    start_time = omp_get_wtime();
    map_c_array(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Non-sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_c_array_sequential(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    array = simulate_array(array_size, 1234);
    // Parallel calculation using OMP
    start_time = omp_get_wtime();
    map_C_array_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Parallel calculation using OMP time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation using OMP
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_C_array_sequential_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation using OMP time: %f seconds\n", elapsed_time);

    free(array);
    return 0;
}



/*
*******************************************************************************
* OMP environment functions
*******************************************************************************
*/
void _set_openmp_schedule_from_env() {
  char *schedule_env = getenv("OMP_SCHEDULE");
  printf("Schedule from env %s\n", getenv("OMP_SCHEDULE"));
  if (schedule_env != NULL) {
    char *kind_str = strtok(schedule_env, ",");
    char *chunk_size_str = strtok(NULL, ",");

    omp_sched_t kind;
    if (strcmp(kind_str, "static") == 0) {
      kind = omp_sched_static;
    } else if (strcmp(kind_str, "dynamic") == 0) {
      kind = omp_sched_dynamic;
    } else if (strcmp(kind_str, "guided") == 0) {
      kind = omp_sched_guided;
    } else {
      kind = omp_sched_auto;
    }
    int chunk_size = (chunk_size_str != NULL) ? atoi(chunk_size_str) : 0; // <1 selects the default chunk size
    omp_set_schedule(kind, chunk_size);
  }
}

void _set_num_threads_from_env() {
  char *num = getenv("OMP_NUM_THREADS");
  if (num != NULL) { // guard against an unset variable before atoi
    printf("Number of threads = %s from within C\n", num);
    omp_set_num_threads(atoi(num));
  }
}
/*
*******************************************************************************
* Functions that modify C arrays in place
*******************************************************************************
*/

double*  simulate_array(int num_of_elements, int seed) {
  srand(seed); // Seed the random number generator
  double *array = (double *)malloc(num_of_elements * sizeof(double));
  for (int i = 0; i < num_of_elements; i++) {
    array[i] =
        (double)rand() / RAND_MAX; // Generate a random double between 0 and 1
  }
  return array;
}

void map_c_array(double *array, int len) {
  for (int i = 0; i < len; i++) {
    array[i] = cos(sin(sqrt(array[i])));
  }
}

void map_c_array_sequential(double* array, int len) {
  for (int i = 0; i < len; i++) {
    array[i] = sqrt(array[i]);
  }
  for (int i = 0; i < len; i++) {
    array[i] = sin(array[i]);
  }
  for (int i = 0; i < len; i++) {
    array[i] = cos(array[i]);
  }
}

void map_C_array_using_OMP(double* array, int len) {
#pragma omp parallel
  {
#pragma omp for schedule(runtime) nowait
    for (int i = 0; i < len; i++) {
      array[i] = cos(sin(sqrt(array[i])));
    }
  }
}

void map_C_array_sequential_using_OMP(double* array, int len) {
#pragma omp parallel
  {
    // Note: the first two loops must not use "nowait": with a non-static
    // schedule a thread could start the sin loop before every element has
    // been through the sqrt loop, which would be a data race
#pragma omp for schedule(runtime)
    for (int i = 0; i < len; i++) {
      array[i] = sqrt(array[i]);
    }
#pragma omp for schedule(runtime)
    for (int i = 0; i < len; i++) {
      array[i] = sin(array[i]);
    }
#pragma omp for schedule(runtime) nowait
    for (int i = 0; i < len; i++) {
      array[i] = cos(array[i]);
    }
  }
}

A critical question is whether the use of fast-math compiler flags, a trick that trades accuracy for speed, can affect performance.
Here is the makefile without this compiler flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native  -Wall -std=gnu11 -fopenmp -fstrict-aliasing 
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_noffmath_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_noffmath_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_noffmath_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $< -o $@

and here is the one with this flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native -Wall -std=gnu11 -fopenmp -fstrict-aliasing -ffast-math
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $< -o $@

And here are the results of running these two programs:

  • Without -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_noffmath_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 1.12 seconds
Sequential calculation time: 0.95 seconds
Parallel calculation using OMP time: 0.17 seconds
Sequential calculation using OMP time: 0.15 seconds
  • With -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 0.27 seconds
Sequential calculation time: 0.28 seconds
Parallel calculation using OMP time: 0.05 seconds
Sequential calculation using OMP time: 0.06 seconds

Note that one can use fastmath in Numba code as follows (the default is fastmath=False):

from numba import njit
import numpy as np

@njit(nogil=True, fastmath=True)
def compute_inplace_with_numba(array):
    np.sqrt(array, array)
    np.sin(array, array)
    np.cos(array, array)

A few points that are worth noting:

  • The -ffast-math flag gives a major boost in performance (about 300%, i.e. roughly a 4x speedup, for both the single-threaded and the multi-threaded code), but it can generate erroneous results (see the sketch after this list)
  • Fastmath also works in Numba, but it should be avoided for the same reasons it should be avoided in any application that strives for accuracy
  • The sequential single-threaded C code performs similarly to the single-threaded PDL and numpy implementations
  • Somewhat surprisingly, the sequential code is about 20% faster than the non-sequential code when correct (non-fast) math is used
  • Unsurprisingly, the multi-threaded code is faster than the single-threaded code :)
  • I still cannot explain how Numba delivers a 50% performance premium over the C code for this rather simple function
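
To make the accuracy caveat concrete, here is a minimal sketch (an illustration, not part of the benchmark) of a well-known -ffast-math pitfall: the flag implies -ffinite-math-only, so gcc is allowed to assume NaN never occurs and may compile the isnan check below to a constant false.

#include <math.h>
#include <stdio.h>

int main(void) {
    volatile double x = -1.0;  // volatile prevents constant folding
    double y = sqrt(x);        // NaN for negative input
    // Under -ffast-math (which implies -ffinite-math-only), the compiler
    // may assume y is never NaN and reduce this test to "false"
    if (isnan(y))
        printf("caught NaN\n");
    else
        printf("NaN went undetected\n");
    return 0;
}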

title: " The Quest for Performance Part III : C Force "

日付: 2024-07-07

このシリーズの前 2 回の記事では、Perl での浮動演算のパフォーマンスについて検討しました。
関数 cos(sin(sqrt(x))) を計算するおもちゃの例では、Python と R が使用されています。ここで、x は 50M の倍精度浮動小数点数の 非常に大きな 配列です。
算術集中部分を C に委任したハイブリッド実装は、最もパフォーマンスの高い実装の 1 つでした。今回は少し脱線して、おもちゃのサンプルの純粋な C コード実装のパフォーマンスを見ていきます。
C コードは、パフォーマンスに対するメモリの局所性の重要性について、コンテナと比較してさらに洞察を提供します (デフォルトでは、C 配列の要素はメモリ内の連続したアドレスに格納され、PDL やそのようなコンテナとの numpy インターフェイスなどの数値 API) 、
例えばメモリ内の連続したアドレスに値を格納しない Perl 配列。最後に、重要なことですが、C コードの実装により、低レベル コンパイラ (この場合は gcc) の浮動小数点演算に関連するフラグがパフォーマンスに影響を与えるかどうかを評価できるようになります。
この点は強調する価値があります。一般的な人間は、「インストール」を「パイプ」するとき、またはインライン ファイルをビルドするときのコンパイラ フラグの選択に完全に依存しています。これらのフラグに触れなければ、何が欠けているのか、あるいは避けている落とし穴があるのか​​、幸いにも気づかないことになるでしょう。
謙虚な C ファイル makefile を使用すると、このようなパフォーマンス評価を明示的に行うことができます。

おもちゃの例の C コード全体を以下に示します。このコードは一目瞭然なので、

に対する 4 つの関数が含まれていることを指摘する以外は説明に時間を費やすことはありません。
  • 高価な関数の非順次計算: 3 つの浮動小数点演算はすべて、1 つのスレッドを使用して単一のループ内で実行されます
  • 高価な関数の逐次計算: 3 つの浮動小数点関数の評価はそれぞれ、1 つのスレッドを使用して別個のループ内で行われます
  • ノンシーケンシャル OpenMP コード: ノンシーケンシャル コードのスレッド バージョン
  • シーケンシャル OpenMP コード: シーケンシャル コードのスレッド化

この場合、コンパイラーが、平方根がアセンブリ内のパックされた (ベクトル化された) 浮動小数点演算にマップされることを認識するのに十分賢いので、適切な SIMD 命令を使用して 1 つの関数をベクトル化できることを期待するかもしれません (実際に実行したことに注意してください) OpenMP コードには simd プログラムを使用しないでください)。
おそらく、ベクトル化による高速化により、同じメモリ位置に繰り返しアクセスする (またはしない) ことによるパフォーマンスの低下が相殺される可能性があります。

#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <stdio.h>
#include <omp.h>

// simulates a large array of random numbers
double*  simulate_array(int num_of_elements,int seed);
// OMP environment functions
void _set_openmp_schedule_from_env();
void _set_num_threads_from_env();



// functions to modify C arrays 
void map_c_array(double* array, int len);
void map_c_array_sequential(double* array, int len);
void map_C_array_using_OMP(double* array, int len);
void map_C_array_sequential_using_OMP(double* array, int len);

int main(int argc, char *argv[]) {
    if (argc != 2) {
        printf("Usage: %s <array_size>\n", argv[0]);
        return 1;
    }

    int array_size = atoi(argv[1]);
    // printf the array size
    printf("Array size: %d\n", array_size);
    double *array = simulate_array(array_size, 1234);

    // Set OMP environment
    _set_openmp_schedule_from_env();
    _set_num_threads_from_env();

    // Perform calculations and collect timing data
    double start_time, end_time, elapsed_time;
    // Non-Sequential calculation
    start_time = omp_get_wtime();
    map_c_array(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Non-sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_c_array_sequential(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    array = simulate_array(array_size, 1234);
    // Parallel calculation using OMP
    start_time = omp_get_wtime();
    map_C_array_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Parallel calculation using OMP time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation using OMP
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_C_array_sequential_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation using OMP time: %f seconds\n", elapsed_time);

    free(array);
    return 0;
}



/*
*******************************************************************************
* OMP environment functions
*******************************************************************************
*/
void _set_openmp_schedule_from_env() {
  char *schedule_env = getenv("OMP_SCHEDULE");
  printf("Schedule from env %s\n", getenv("OMP_SCHEDULE"));
  if (schedule_env != NULL) {
    char *kind_str = strtok(schedule_env, ",");
    char *chunk_size_str = strtok(NULL, ",");

    omp_sched_t kind;
    if (strcmp(kind_str, "static") == 0) {
      kind = omp_sched_static;
    } else if (strcmp(kind_str, "dynamic") == 0) {
      kind = omp_sched_dynamic;
    } else if (strcmp(kind_str, "guided") == 0) {
      kind = omp_sched_guided;
    } else {
      kind = omp_sched_auto;
    }
    int chunk_size = atoi(chunk_size_str);
    omp_set_schedule(kind, chunk_size);
  }
}

void _set_num_threads_from_env() {
  char *num = getenv("OMP_NUM_THREADS");
  printf("Number of threads = %s from within C\n", num);
  omp_set_num_threads(atoi(num));
}
/*
*******************************************************************************
* Functions that modify C arrays whose address is passed from Perl in C
*******************************************************************************
*/

double*  simulate_array(int num_of_elements, int seed) {
  srand(seed); // Seed the random number generator
  double *array = (double *)malloc(num_of_elements * sizeof(double));
  for (int i = 0; i < num_of_elements; i++) {
    array[i] =
        (double)rand() / RAND_MAX; // Generate a random double between 0 and 1
  }
  return array;
}

void map_c_array(double *array, int len) {
  for (int i = 0; i < len; i++) {
    array[i] = cos(sin(sqrt(array[i])));
  }
}

void map_c_array_sequential(double* array, int len) {
  for (int i = 0; i < len; i++) {
    array[i] = sqrt(array[i]);
  }
  for (int i = 0; i < len; i++) {
    array[i] = sin(array[i]);
  }
  for (int i = 0; i < len; i++) {
    array[i] = cos(array[i]);
  }
}

void map_C_array_using_OMP(double* array, int len) {
#pragma omp parallel
  {
#pragma omp for schedule(runtime) nowait
    for (int i = 0; i < len; i++) {
      array[i] = cos(sin(sqrt(array[i])));
    }
  }
}

void map_C_array_sequential_using_OMP(double* array, int len) {
#pragma omp parallel
  {
#pragma omp for schedule(runtime) nowait
    for (int i = 0; i < len; i++) {
      array[i] = sqrt(array[i]);
    }
#pragma omp for schedule(runtime) nowait
    for (int i = 0; i < len; i++) {
      array[i] = sin(array[i]);
    }
#pragma omp for schedule(runtime) nowait
    for (int i = 0; i < len; i++) {
      array[i] = cos(array[i]);
    }
  }
}

重要な問題は、高速フローティング コンパイラ フラグの使用 (コードの精度と速度を引き換えにするトリック) がパフォーマンスに影響を与える可能性があるかどうかです。
このコンパイラ フラグのないメイクファイルは次のとおりです

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native  -Wall -std=gnu11 -fopenmp -fstrict-aliasing 
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_noffmath_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_noffmath_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_noffmath_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $< -o $@

そして、これがこのフラグを持つものです:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native -Wall -std=gnu11 -fopenmp -fstrict-aliasing -ffast-math
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $< -o $@

これら 2 つのプログラムを実行した結果が次のとおりです

  • -ffast-math なし
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_noffmath_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 1.12 seconds
Sequential calculation time: 0.95 seconds
Parallel calculation using OMP time: 0.17 seconds
Sequential calculation using OMP time: 0.15 seconds
  • -ffast-math を使用する
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 0.27 seconds
Sequential calculation time: 0.28 seconds
Parallel calculation using OMP time: 0.05 seconds
Sequential calculation using OMP time: 0.06 seconds

次のように Numba コードで fastmath を使用できることに注意してください (デフォルトは fastmath=False)。

@njit(nogil=True,fastmath=True)
def compute_inplace_with_numba(array):
    np.sqrt(array,array)
    np.sin(array,array)
    np.cos(array,array)

注目に値するいくつかの点:

  • -ffast-math はパフォーマンスを大幅に向上させます (シングルスレッドコードとマルチスレッドコードの両方で約 300%) が、誤った結果が生成される可能性があります
  • Fastmath は Numba でも動作しますが、正確さを追求するアプリケーションで避けるべきと同じ理由で避けるべきです
  • シーケンシャル C シングル スレッド コードは、シングル スレッド PDL や Numpy と同様のパフォーマンスを提供します
  • 少し驚くべきことに、正しい (高速ではない) 計算を使用すると、シーケンシャル コードは非シーケンシャル コードよりも約 20% 高速になります。
  • 当然のことながら、マルチスレッド コードはシングル スレッド コードより高速です :)
  • numbas が、このかなり単純な関数の C コードよりもどのようにパフォーマンスを 50% 向上させるのか、まだ説明できません。
