
avr32 optimization


This is on an AVR32 target.

Where does the speed difference between the following two code
fragments come from?

#include <stdint.h>	/* for uint8_t */

struct foo buf[2];

static inline void rx_int(uint8_t ch){
	LED_On(DEBUG_LED);
	//access to buf[ch]
	LED_Off(DEBUG_LED);
}

__attribute__((__interrupt__)) static void can0_int_rx_handler(void){
	rx_int(0);
}

__attribute__((__interrupt__)) static void can1_int_rx_handler(void){
	rx_int(1);
}


-----

struct foo buf[2];

__attribute__((__interrupt__)) static void can0_int_rx_handler(void){
	LED_On(DEBUG_LED);
	//access to buf[0]
	LED_Off(DEBUG_LED);
}

__attribute__((__interrupt__)) static void can1_int_rx_handler(void){
	LED_On(DEBUG_LED);
	//access to buf[1]
	LED_Off(DEBUG_LED);
}

In my case the second version runs 7% faster.
Shouldn't GCC, in this case, be able to optimize the buf[ch] access
into a direct access?
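
For what it's worth, here is a rough way one could check whether the
constant argument really propagates into the inlined body (just a
sketch using GCC's __builtin_constant_p built-in, not tested on avr32;
the names are the same as in the fragments above):

static inline void rx_int(uint8_t ch){
	LED_On(DEBUG_LED);
	/* __builtin_constant_p() evaluates to 1 when GCC knows the value
	 * at compile time. If constant propagation works after inlining,
	 * the else branch should be eliminated entirely. */
	if (__builtin_constant_p(ch)) {
		//access to buf[ch] with a compile-time-constant index
	} else {
		//index computed at run time: buf + ch * sizeof(struct foo)
	}
	LED_Off(DEBUG_LED);
}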

just curious,
Max Schneider.
