This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.

Re: Development process for i386 machine descriptions


Hello!

1.) The processor_costs structure seems very limited, but also very easy to "fill in". Are these costs supposed to be best case or worst case? For instance, many instructions vary in latency depending on the size of their operands.

Instruction costs are further refined in ix86_rtx_costs in config/i386/i386.c, and the cost of various operand types is determined in several *_cost functions scattered around the i386.c file.
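
For a concrete picture, the existing cost tables in i386.c are plain aggregate initializers. A trimmed sketch follows (the table name is made up and the field order is illustrative; check struct processor_costs in your GCC sources for the authoritative layout):

  static const
  struct processor_costs myproc_cost = {
    COSTS_N_INSNS (1),	/* cost of an add instruction */
    COSTS_N_INSNS (1),	/* cost of a lea instruction */
    COSTS_N_INSNS (4),	/* variable shift costs */
    COSTS_N_INSNS (1),	/* constant shift costs */
    /* ... multiply, divide, register move and memory costs,
       then the stringop and vectorizer entries discussed
       below ... */
  };

COSTS_N_INSNS just scales an instruction count into the units the rtx cost hooks expect, so latencies are expressed relative to a simple ALU instruction.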


2.) I don't understand the meaning of the stringop_algs, scalar, vector, and branching costs at the end of the processor_costs structure. Could someone give me an accurate description?

stringop_algs is a structure that defines which algorithms to use for the string processing functions (memcpy, memset, ...). This structure also defines the size thresholds at which each algorithm applies.
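
As an illustration, here is what one memcpy entry in a cost table might look like, following the shape of the existing tables (a hedged sketch: the exact struct layout and the set of algorithm enum values live in i386.h, so treat the details below as examples rather than gospel):

  /* memcpy: blocks up to 256 bytes use "rep movsl"; anything
     larger goes through a library call.  -1 terminates the
     list of {threshold, algorithm} pairs.  */
  {{libcall, {{256, rep_prefix_4_byte}, {-1, libcall}}},
   DUMMY_STRINGOP_ALGS},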


The costs at the end of a cost structure are used in autovectorization decisions when -fvect-cost-model is in effect (please look at the end of i386.h, where these values are used).
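
Concretely, the tail of a cost table looks roughly like this (the numbers are made up; the comments follow the field names of the struct processor_costs tail, which may differ between GCC versions):

  1,	/* scalar_stmt_cost */
  1,	/* scalar load cost */
  1,	/* scalar store cost */
  1,	/* vec_stmt_cost */
  1,	/* cost of vec -> scalar conversion */
  1,	/* cost of scalar -> vec conversion */
  1,	/* aligned vector load cost */
  2,	/* unaligned vector load cost */
  1,	/* vector store cost */
  3,	/* cond_taken_branch_cost */
  1,	/* cond_not_taken_branch_cost */

The vectorizer compares sums of these per-statement scalar and vector costs to decide whether a vectorized version of a loop is likely to be profitable.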

3.) The processor I am currently attempting to model is single-issue/in-order with a simple pipeline. Stalls can occasionally occur in the fetch/decode/translate stages, but the core of the cost is the latency of instructions in the functional units of the execute stage. What recommendations can anyone make for designing the DFA? Should it just directly model the functional-unit latencies for certain insn types?

Hm, perhaps you should look into the {athlon, geode, k6, pentium, ppro}.md files first. All these files define scheduling for various processors, and I'm sure quite a few ideas can be harvested there.
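
For a single-issue in-order core, a minimal DFA in the style of those files could look like the sketch below (the automaton, unit, and "cpu" attribute value "simple" are made-up names; the "type" attribute values come from i386.md):

  ;; One automaton with a single execution unit: since the
  ;; pipeline is single-issue, every insn reserves the unit
  ;; for at least one cycle.
  (define_automaton "simple")
  (define_cpu_unit "simple_exec" "simple")

  ;; ALU ops: result ready the next cycle.
  (define_insn_reservation "simple_alu" 1
    (and (eq_attr "cpu" "simple")
         (eq_attr "type" "alu"))
    "simple_exec")

  ;; Multiply: result ready after 4 cycles.  If the multiplier
  ;; is not pipelined, write "simple_exec*4" instead so the
  ;; unit stays busy and stalls subsequent insns.
  (define_insn_reservation "simple_imul" 4
    (and (eq_attr "cpu" "simple")
         (eq_attr "type" "imul"))
    "simple_exec")

The first number in define_insn_reservation is the result latency the scheduler uses; the reservation string at the end describes how long functional units stay occupied, which is how structural hazards are modeled.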

Uros.

